# Kubernetes Backend
Run sandboxes as Kubernetes Pods on any cluster. Each sandbox is a Pod running `sleep infinity` that accepts commands via the K8s exec API.
## Quick Start
```bash
# Create and run a sandbox on Kubernetes
agentkernel create my-sandbox --backend kubernetes --image alpine:3.20
agentkernel start my-sandbox
agentkernel exec my-sandbox -- echo "hello from k8s"
agentkernel stop my-sandbox
```
Or use `run` for ephemeral one-shot execution. A sketch, assuming `run` accepts the same backend and image flags as `create`:
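```bash
agentkernel run --backend kubernetes --image alpine:3.20 -- echo "hello from k8s"
```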
## Configuration
```toml
[orchestrator]
provider = "kubernetes"
namespace = "agentkernel"          # K8s namespace (default: "agentkernel")
kubeconfig = "~/.kube/config"      # Optional, auto-detected
context = "my-cluster"             # Optional kubeconfig context
runtime_class = "gvisor"           # Optional: "gvisor", "kata"
service_account = "agentkernel-sa" # Optional service account
warm_pool_size = 10                # Pre-warmed pods (default: 10)
max_pool_size = 50                 # Maximum total pods (default: 50)
```
| Field | Type | Default | Description |
|---|---|---|---|
| `namespace` | string | `agentkernel` | Kubernetes namespace for sandbox pods |
| `kubeconfig` | string | auto-detected | Path to kubeconfig file |
| `context` | string | current context | Kubeconfig context to use |
| `runtime_class` | string | none | RuntimeClass for stronger isolation |
| `service_account` | string | none | Service account for sandbox pods |
| `warm_pool_size` | int | 10 | Number of pre-warmed idle pods |
| `max_pool_size` | int | 50 | Maximum concurrent pods |
## Client Configuration
The Kubernetes backend resolves credentials in order:
- In-cluster service account (when running inside K8s)
- `kubeconfig` path from config
- `KUBECONFIG` environment variable
- `~/.kube/config`
## Security
Each sandbox pod runs with:
- `privileged: false`
- `allowPrivilegeEscalation: false`
- `runAsNonRoot: true`, `runAsUser: 1000`
- All capabilities dropped (`drop: ["ALL"]`)
- `automountServiceAccountToken: false`
- Pod Security Standards: `restricted`
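Taken together, these settings correspond to a pod spec roughly like the sketch below. This is an illustration assembled from the list above, not the literal manifest the backend renders:

```yaml
# Sketch of the security settings above in pod-spec form (illustrative)
spec:
  automountServiceAccountToken: false
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
    - name: sandbox
      image: alpine:3.20
      command: ["sleep", "infinity"]
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
```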
When `network: false`, a NetworkPolicy is automatically created that denies all ingress and egress for the sandbox pod. The policy is cleaned up on stop.
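The deny-all policy has the standard Kubernetes shape. A sketch, with the name and pod selector assumed for illustration (the real policy targets only the one sandbox pod):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: agentkernel-my-sandbox-deny-all   # name assumed for illustration
  namespace: agentkernel
spec:
  podSelector:
    matchLabels:
      agentkernel/managed-by: agentkernel  # selector assumed; real policy is per-pod
  policyTypes: ["Ingress", "Egress"]       # no rules listed = all traffic denied
```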
For stronger isolation, set `runtime_class` to `gvisor` or `kata` to run pods in a dedicated kernel sandbox.
## Warm Pool
The Kubernetes warm pool pre-creates pods labeled `agentkernel/pool=warm`. When you call `acquire()`, a warm pod is relabeled to `active` and returned immediately. When released, the pod is deleted and a replacement is created.
A background task runs every 30 seconds to maintain the target warm count.
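You can inspect the pool directly using the label above (namespace per your config):

```bash
kubectl get pods -n agentkernel -l agentkernel/pool=warm
```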
## Verifying with kubectl
```bash
# List agentkernel pods
kubectl get pods -n agentkernel -l agentkernel/managed-by=agentkernel

# Check a specific sandbox pod
kubectl describe pod agentkernel-my-sandbox -n agentkernel

# View pod labels
kubectl get pod agentkernel-my-sandbox -n agentkernel --show-labels
```
## Operator and CRDs (Optional)
For Kubernetes-native management, agentkernel provides Custom Resource Definitions.
### AgentSandbox CRD
```yaml
apiVersion: agentkernel/v1alpha1
kind: AgentSandbox
metadata:
  name: my-sandbox
spec:
  image: python:3.12-alpine
  vcpus: 2
  memory_mb: 1024
  network: true
  read_only: false
  runtime_class: gvisor
  security_profile: moderate
  env:
    - name: API_KEY
      value: "sk-..."
```
The operator watches AgentSandbox resources and creates/manages pods automatically. Status is reported back to the CR:
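The concrete status schema is not reproduced here; a hypothetical shape, with all field names assumed for illustration:

```yaml
status:
  phase: Running                    # hypothetical field
  podName: agentkernel-my-sandbox   # pod naming matches the kubectl examples above
  message: ""                       # hypothetical field
```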
### AgentSandboxPool CRD
```yaml
apiVersion: agentkernel/v1alpha1
kind: AgentSandboxPool
metadata:
  name: default-pool
spec:
  warm_pool_size: 20
  max_pool_size: 100
  image: alpine:3.20
  vcpus: 1
  memory_mb: 512
```
### AgentKernelPolicy CRD (Enterprise)
Namespaced Cedar policy that applies to sandboxes in the same namespace. Requires the `enterprise` feature.
```yaml
apiVersion: agentkernel/v1alpha1
kind: AgentKernelPolicy
metadata:
  name: deny-network-staging
  namespace: staging
spec:
  cedar: |
    forbid(
      principal,
      action == AgentKernel::Action::"Network",
      resource
    );
  priority: 100
  description: "Block network access in staging"
```
Apply with `kubectl apply`; the operator validates the Cedar syntax and reports status:
```bash
kubectl get akp -A   # List all namespace policies
kubectl describe akp deny-network-staging -n staging
```
### ClusterAgentKernelPolicy CRD (Enterprise)
Cluster-scoped Cedar policy that applies to all sandboxes globally.
```yaml
apiVersion: agentkernel/v1alpha1
kind: ClusterAgentKernelPolicy
metadata:
  name: default-permit
spec:
  cedar: |
    permit(
      principal is AgentKernel::User,
      action,
      resource is AgentKernel::Sandbox
    );
  priority: 0
  description: "Default permit for all authenticated users"
```
### Policy Evaluation Order
- Cluster-scoped policies are loaded first (lower scope weight)
- Within the same scope, higher `priority` values take precedence
- Cedar's default-deny model applies: if no `permit` matches, the action is denied
- `forbid` rules always override `permit` rules regardless of priority

For example, given the two policies above, a `Network` action in the `staging` namespace is denied: the namespaced `forbid` overrides the cluster-wide `permit` regardless of their priorities.
### Policy Status
The operator sets status on each policy CR:
| Field | Description |
|---|---|
| `valid` | Whether the Cedar syntax parsed successfully |
| `active` | Whether the policy is loaded in the evaluation engine |
| `message` | Error details when `valid: false` |
| `lastApplied` | Timestamp of last successful load |
| `observedGeneration` | Generation for change detection |
### Identity from Sandbox Annotations
The policy engine reads principal identity from sandbox CR annotations:
| Annotation | Maps to | Default |
|---|---|---|
| `agentkernel/user-id` | `Principal.id` | `anonymous` |
| `agentkernel/email` | `Principal.email` | `anonymous@unknown` |
| `agentkernel/org-id` | `Principal.org_id` | `default` |
| `agentkernel/roles` | `Principal.roles` (comma-separated) | `developer` |
| `agentkernel/mfa-verified` | `Principal.mfa_verified` | `false` |
| `agentkernel/agent-type` | `Resource.agent_type` | `unknown` |
| `agentkernel/runtime` | `Resource.runtime` | `unknown` |
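On an AgentSandbox, these are ordinary metadata annotations; a sketch with illustrative values:

```yaml
metadata:
  name: my-sandbox
  annotations:
    agentkernel/user-id: "u-123"
    agentkernel/email: "dev@example.com"
    agentkernel/org-id: "acme"
    agentkernel/roles: "developer,admin"
    agentkernel/mfa-verified: "true"
```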
### Generating CRD Manifests
```rust
use agentkernel::backend::kubernetes_operator::generate_crd_manifests;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Write each generated CRD manifest to its own YAML file.
    let crds = generate_crd_manifests()?;
    for (i, crd) in crds.iter().enumerate() {
        std::fs::write(format!("crd-{}.yaml", i), crd)?;
    }
    Ok(())
}
```
Or generate all CRDs at once. `generate_crd_manifests()` returns one manifest per CRD, so a minimal sketch just joins them into a single multi-document file:
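```rust
use agentkernel::backend::kubernetes_operator::generate_crd_manifests;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One file, applyable with a single `kubectl apply -f agentkernel-crds.yaml`.
    let crds = generate_crd_manifests()?;
    std::fs::write("agentkernel-crds.yaml", crds.join("\n---\n"))?;
    Ok(())
}
```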
## Deploying agentkernel on Kubernetes
Run agentkernel itself as a Kubernetes service that manages sandbox pods via the HTTP API.
### Install with Helm
```bash
# Install from OCI registry (recommended)
helm install agentkernel oci://ghcr.io/thrashr888/charts/agentkernel \
  --version 0.6.0 \
  --namespace agentkernel-system \
  --create-namespace
```
Note: The OCI chart is published automatically on each release. If not yet available, use the local clone method below.
Or install from a local clone:
```bash
git clone https://github.com/thrashr888/agentkernel.git
helm install agentkernel agentkernel/deploy/helm/agentkernel/ \
  --namespace agentkernel-system \
  --create-namespace
```
### Helm Values
Override defaults with `--set` flags or a custom `values.yaml`:
```yaml
backend: kubernetes

orchestrator:
  namespace: agentkernel-sandboxes    # Where sandbox pods run
  runtimeClass: ""                    # "gvisor" if available
  warmPoolSize: 10                    # Pre-warmed pods
  maxSandboxes: 200                   # Cluster-wide limit
  serviceAccount: agentkernel-sandbox # SA for sandbox pods

sandbox:
  defaults:
    image: alpine:3.20
    memory: 512Mi
    cpu: "1"
    securityProfile: restrictive

apiKey: "" # Set via --set apiKey=<key> or external secret

resources:
  limits:
    memory: 256Mi
    cpu: 500m
  requests:
    memory: 128Mi
    cpu: 100m

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 5
```
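For example, to raise the warm pool on an existing release (value path taken from the file above):

```bash
helm upgrade agentkernel oci://ghcr.io/thrashr888/charts/agentkernel \
  --namespace agentkernel-system \
  --reuse-values \
  --set orchestrator.warmPoolSize=20
```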
### What the Chart Creates
| Resource | Purpose |
|---|---|
| Deployment | agentkernel API server |
| Service | ClusterIP on port 18888 |
| ServiceAccount | For the API server pod |
| ClusterRole | RBAC for managing sandbox pods |
| ClusterRoleBinding | Binds role to service account |
| ConfigMap | agentkernel.toml configuration |
| Namespace | Sandbox namespace (configurable) |
| Secret | API key (if set) |
| HPA | Horizontal Pod Autoscaler (optional) |
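To reach the API from outside the cluster during testing, a port-forward works; the service name is assumed here to match the Helm release name:

```bash
kubectl -n agentkernel-system port-forward svc/agentkernel 18888:18888
```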
### RBAC
The Helm chart creates a ClusterRole with permissions to:
- Create, delete, list, get pods in the sandbox namespace
- Create and delete NetworkPolicies (for `network: false` sandboxes)
- Exec into pods (for `agentkernel exec`)
- Watch and update AgentSandbox and AgentSandboxPool CRDs
- (Enterprise) Watch and update AgentKernelPolicy and ClusterAgentKernelPolicy CRDs
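Rendered as Kubernetes RBAC, the non-enterprise rules look roughly like this excerpt. The API group for the CRDs is inferred from the `apiVersion` above; the authoritative manifest is in the chart:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: agentkernel # name assumed for illustration
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "list", "get"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["networkpolicies"]
    verbs: ["create", "delete"]
  - apiGroups: ["agentkernel"]
    resources: ["agentsandboxes", "agentsandboxpools"]
    verbs: ["get", "list", "watch", "update"]
```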