
Iddio Docs

Everything you need to deploy, configure, and operate Iddio as a security gateway for your AI agents. From first install to production policy management.

Get Iddio running in under five minutes. This guide covers installation, initialization, adding your first agent, and proxying your first command.

1. Install

There are two ways to run Iddio locally: a desktop app with a GUI, or the CLI proxy.

Desktop app (macOS)

A native macOS app with a visual policy editor, approval dialogs, session viewer, and built-in terminal.

brew install --cask leonardaustin/tap/iddio-desktop

Or grab the latest .dmg from GitHub Releases (look for desktop-v* tags).

If you install the desktop app, it handles initialization and proxy management for you — skip to Step 4 to configure your policy.

CLI proxy

A single binary you run in a terminal. Lightweight, scriptable, works on macOS and Linux.

brew install leonardaustin/tap/iddio

2. Initialize Iddio

Running iddio init creates the ~/.iddio/ directory with a CA, TLS certificates, a default policy, and an empty token store.

iddio init

# Output:
# Generated CA certificate
# Generated TLS certificate (signed by CA)
# Created policy.yaml
# Created tokens.yaml
#
# Iddio initialized at ~/.iddio

3. Add an agent

Running iddio agent add generates a bearer token, a client certificate, and a kubeconfig for the agent. The kubeconfig points at the Iddio proxy, not the real cluster.

iddio agent add claude-code

# Output:
# Generated client certificate (SPIFFE: spiffe://iddio.local/agent/claude-code)
# Agent "claude-code" created
#   Token: iddio_a1b2c3d4...
#   Kubeconfig: ~/.iddio/agents/claude-code/kubeconfig
#   Client cert: ~/.iddio/agents/claude-code/agent.crt

4. Configure the policy

The default policy denies all requests. Edit ~/.iddio/policy.yaml to grant your agent access:

$EDITOR ~/.iddio/policy.yaml

A minimal working policy that allows reads and escalates writes:

default: deny

agents:
  claude-code:
    rules:
      - namespaces: ["*"]
        tiers:
          0: allow # reads auto-allowed
          2: escalate # writes need approval
          3: escalate # sensitive ops need approval
          4: deny # no break-glass

See the Policy Configuration page for multi-protocol rules, namespace globs, and more examples.


Changes to policy.yaml are hot-reloaded automatically. If the proxy is already running, just save the file — the new policy takes effect within a second.

5. Start the proxy

Point Iddio at your real cluster. It will listen on https://localhost:6443 by default.

iddio start --cluster-url https://your-cluster:6443 --identity-mode hybrid

# Output:
# iddio proxy listening on :6443 → https://your-cluster:6443
#   identity: hybrid
#   approval: terminal
#   credentials: kubeconfig
#   agents: claude-code
#   policy: ~/.iddio/policy.yaml
#   hot-reload: enabled (policy.yaml, tokens.yaml)

The default identity mode is hybrid, which accepts both mTLS client certificates and bearer tokens. Since agent add generates kubeconfigs with mTLS client certificates, this works out of the box. You can restrict to mTLS-only with --identity-mode mtls or token-only with --identity-mode token.

6. Test it

In another terminal, use the agent’s kubeconfig to run commands. Read operations flow through instantly. Write operations will prompt you for approval.

# Set the agent kubeconfig
export KUBECONFIG=~/.iddio/agents/claude-code/kubeconfig

# Read operation — auto-allowed (T0)
kubectl get pods -n payments
# NAME                    READY   STATUS    RESTARTS
# api-7d4b8f6c9-x2kl5    1/1     Running   0

# Delete operation — requires approval (T3 sensitive)
kubectl delete pod api-7d4b8f6c9-x2kl5 -n payments
# (hangs until you approve in the iddio terminal)

# In the iddio terminal you'll see:
# ⚠️  ESCALATE  [claude-code]  DELETE payments/pods
#    tier 3 (sensitive) — approve? [y/N] (60s timeout): y

The agent’s kubectl doesn’t know Iddio exists. From its perspective, the cluster is just sometimes slower for write operations (while awaiting approval).


Something wrong? See Troubleshooting.

Every request that flows through Iddio passes through a 5-step pipeline in the proxy’s ServeHTTP() handler. This pipeline is the core of the product.

01
IDENTITY

Extract agent identity from the request. Supports three modes: Bearer token (constant-time comparison), mTLS with SPIFFE URI from the client certificate, or hybrid (mTLS preferred, token fallback). Configure with --identity-mode token|mtls|hybrid.
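The constant-time requirement matters because a naive string comparison leaks how many leading bytes of a token match. A minimal Python sketch of the idea (illustrative only; Iddio's implementation lives in internal/identity.go):

```python
import hmac

def token_matches(presented: str, stored: str) -> bool:
    """Compare bearer tokens in constant time."""
    # hmac.compare_digest takes the same time regardless of where the
    # inputs first differ, defeating byte-by-byte timing attacks.
    return hmac.compare_digest(presented.encode(), stored.encode())
```

A plain `presented == stored` short-circuits at the first mismatching byte, which an attacker can measure to recover a token prefix incrementally.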

02
CLASSIFY

Parse the HTTP method and Kubernetes API path to determine the risk tier. GET/HEAD/OPTIONS → T0 (observe), POST/PUT/PATCH → T2 (modify), DELETE → T3 (sensitive). Special rules: reading Secrets → T3, exec/attach/portforward → T4, RBAC/webhook/CRD/namespace/PV mutations → T4 (break-glass).

03
POLICY

Evaluate the agent + classification against the YAML policy rules. Rules are scoped by protocol (kubernetes, ssh, aws, terraform, helm) and by namespace/host/service with per-tier decisions: allow, deny, or escalate. Namespace globs are supported. If the request matches an assigned runbook pattern, the tier is downgraded to T1 (operate) before the decision lookup — enabling auto-approval for pre-approved operations.

04
FORWARD

For exec/attach upgrades: hijack the connection and proxy as a raw TCP/TLS tunnel with session recording. For all other requests: reverse proxy to the real cluster. Strips the agent’s token and uses operator credentials. In JIT mode (--credential-mode jit), creates a Just-In-Time ServiceAccount token with a configurable TTL (default 5 minutes).

05
AUDIT

Write a hash-chained JSON line to audit.jsonl. Each entry includes: timestamp, agent, protocol, HTTP method, path, tier, resource, namespace, decision, status code, latency, and a SHA-256 hash linked to the previous entry for tamper detection. Exec sessions include a session_id linking to the session recording. Runbook-matched requests include the runbook name.

Approval flow

When a command is classified as needing escalation, the proxy holds the agent’s HTTP connection open and prompts the operator for a decision. The agent’s kubectl simply waits — no polling, no request IDs, no retries. From the agent’s perspective, the cluster is just taking longer to respond.

Agent (kubectl)          Iddio Proxy              Operator Terminal
     │                        │                          │
     ├── DELETE pod/x ───────►│                          │
     │                        ├── classify → T3          │
     │                        ├── policy → escalate      │
     │   (connection held)    ├── prompt ───────────────►│
     │                        │                          │  approve? [y/N]: y
     │                        │◄── approved ─────────────┤
     │                        ├── forward to cluster     │
     │◄── 200 OK ─────────────┤                          │
     │                        ├── audit log written      │

Approval can also be handled via webhook (--approval-mode webhook), which sends an HTTP POST to a configured URL (e.g., Slack) and accepts callbacks with HMAC-signed responses.
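The hold-the-connection mechanic can be sketched as a handler blocking on a queue until the operator answers or a timeout fires. Names and structure here are illustrative, not Iddio's actual code (see internal/approval.go):

```python
import queue
import threading

def handle_escalation(prompt_operator, timeout_s: float = 60.0) -> str:
    """Block until the operator decides; a timeout counts as a denial."""
    answer: queue.Queue = queue.Queue(maxsize=1)

    def ask() -> None:
        # In Iddio this is a terminal [y/N] prompt or a webhook round-trip.
        answer.put("approved" if prompt_operator() else "denied")

    threading.Thread(target=ask, daemon=True).start()
    try:
        # The agent's HTTP connection simply stays open while we wait here.
        return answer.get(timeout=timeout_s)
    except queue.Empty:
        return "denied"
```

From the agent's side there is nothing to implement: the response just arrives late.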

Component architecture

The classifier, policy engine, and audit logger are the core product logic. The proxy and CLI are plumbing that wires them together.

Component          File                      Purpose
Classifier         internal/classifier.go    HTTP method + k8s API path → tier
Runbook Engine     internal/runbook.go       Pre-approved operation patterns, tier downgrade matching
Policy Engine      internal/policy.go        Agent + tier + namespace → decision (multi-protocol, runbook-aware)
Audit Logger       internal/audit.go         Append-only JSONL, hash-chained, mutex-protected
Proxy              internal/proxy.go         HTTPS reverse proxy, ServeHTTP handler
Exec/Attach        internal/hijack.go        WebSocket/SPDY session proxying with recording
Session Recorder   internal/session.go       JSONL session recording with base64-encoded events
Identity           internal/identity.go      CA, certs, mTLS/SPIFFE/hybrid authentication
Approval           internal/approval.go      Terminal and webhook approval with HMAC signing
Credentials        internal/credentials.go   Static and JIT credential sources
CLI                cmd/iddio/main.go         init, agent, start, logs, audit, sessions, runbook commands

Iddio ships as a single binary. All state is stored in ~/.iddio/.

iddio init

Initialize the Iddio configuration directory. Generates a CA, TLS certificates, creates default policy, and sets up the token store.

iddio init

# Creates:
#   ~/.iddio/ca.crt              — CA certificate
#   ~/.iddio/ca.key              — CA private key (0600 permissions)
#   ~/.iddio/tls/server.crt      — Server TLS certificate (signed by CA)
#   ~/.iddio/tls/server.key      — TLS private key (0600 permissions)
#   ~/.iddio/tls/ca.pem          — CA cert PEM for agent kubeconfigs
#   ~/.iddio/tokens.yaml         — Empty token store (0600 permissions)
#   ~/.iddio/policy.yaml         — Default deny-all policy
#   ~/.iddio/agents/             — Directory for agent credentials

Running iddio init when ~/.iddio/ already exists will not overwrite existing files.

iddio agent add <name>

Register a new agent and generate its credentials.

iddio agent add <agent-name>

# Example:
iddio agent add claude-code

# Creates:
#   ~/.iddio/agents/claude-code/kubeconfig  — Points at proxy (0600)
#   ~/.iddio/agents/claude-code/agent.crt   — Client cert with SPIFFE URI SAN (0600)
#   ~/.iddio/agents/claude-code/agent.key   — Client private key (0600)
#   Appends token to ~/.iddio/tokens.yaml

# The generated kubeconfig contains:
#   - cluster name: iddio
#   - server: https://localhost:6443
#   - certificate-authority-data: (iddio CA cert)
#   - user name: agent
#   - client-certificate-data / client-key-data (when CA exists)
#   - OR token: (randomly generated bearer token, when no CA)

iddio agent list

List all registered agents with their identity and creation date.

iddio agent list

# Output:
# NAME          IDENTITY                               CREATED
# claude-code   spiffe://iddio.local/agent/claude-code  2026-02-01
# cursor-ai     spiffe://iddio.local/agent/cursor-ai    2026-02-03

iddio start

Start the proxy. Loads tokens and policy from disk, sets up the cluster transport, and begins listening for agent connections. Changes to policy.yaml and tokens.yaml are hot-reloaded automatically while the proxy is running.

iddio start --cluster-url <url> [flags]

# Flags:
#   --cluster-url string           Kubernetes API server URL (required)
#   --listen string                Proxy listen address (default ":6443")
#   --kubeconfig string            Kubeconfig for the real cluster
#                                  (default: ~/.kube/config)
#   --dir string                   Config directory (default "~/.iddio")
#   --identity-mode string         Agent identity mode: token, mtls, or hybrid
#                                  (default "hybrid")
#   --approval-mode string         Approval mode: terminal or webhook
#                                  (default "terminal")
#   --webhook-url string           URL to POST approval requests to
#   --webhook-secret string        HMAC secret for signing/verifying webhook callbacks
#   --callback-url string          Base URL for approval callbacks
#   --admin-listen string          Admin API listen address (default ":8443")
#   --credential-mode string       Credential mode: kubeconfig or jit
#                                  (default "kubeconfig")
#   --in-cluster                   Use in-cluster Kubernetes config
#   --jit-namespace string         Namespace for JIT ServiceAccount (default "iddio-system")
#   --jit-service-account string   ServiceAccount name for JIT tokens
#                                  (default "iddio-proxy")
#   --jit-ttl string               TTL for JIT tokens (default "5m")

# Example:
iddio start --cluster-url https://10.0.0.1:6443

# With mTLS identity and JIT credentials:
iddio start --cluster-url https://10.0.0.1:6443 \
  --identity-mode hybrid \
  --credential-mode jit \
  --in-cluster

iddio logs

View the audit log.

iddio logs [flags]

# Flags:
#   --last int     Show last N events (default 20)
#   -f, --follow   Stream new entries (like tail -f)
#   --dir string   Config directory (default "~/.iddio")

# Examples:
iddio logs --last 20
iddio logs -f

# For advanced filtering, use jq on the raw JSONL:
jq 'select(.agent == "claude-code")' ~/.iddio/audit.jsonl
jq 'select(.decision == "deny")' ~/.iddio/audit.jsonl
jq 'select(.tier >= 2)' ~/.iddio/audit.jsonl

iddio audit verify

Verify the integrity of the hash-chained audit log. Walks the entire chain and checks that every event’s hash and prev_hash are valid.

iddio audit verify

# Output (valid):
# Checking audit chain: ~/.iddio/audit.jsonl
# Verified 1,247 events. Chain is intact.

# Output (tampered):
# ERROR: Chain broken at line 843: prev_hash mismatch.
#   Expected: a1b2c3d4...
#   Got:      e5f6a7b8...

iddio sessions list

List recorded exec/attach sessions.

iddio sessions list

# Output:
# SESSION ID                       AGENT        POD                COMMAND          STARTED              DURATION  IN     OUT
# a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4 claude-code  payments/api-7f8d9 ["/bin/sh","-c"] 2026-02-08 14:32     2m31s     1.2KB  45KB

iddio sessions replay <session-id>

Replay a recorded exec/attach session with original timing. Output is sanitized to prevent ANSI escape injection.

iddio sessions replay <session-id> [flags]

# Flags:
#   --speed float   Playback speed multiplier (default 1)

# Examples:
iddio sessions replay a1b2c3d4e5f6
iddio sessions replay a1b2c3d4e5f6 --speed 5

iddio sessions inspect <session-id>

Dump raw session events as formatted JSON for machine consumption or debugging.

iddio sessions inspect <session-id>

iddio runbook list

List all defined runbooks and which agents/namespaces they’re assigned to.

iddio runbook list

# Output:
# RUNBOOK              MAX_TIER  AGENTS
# restart-deployment   T3        claude-code (payments, api-gateway, staging-*), ops-agent (*)
# scale-deployment     T3        claude-code (payments, api-gateway, staging-*), ops-agent (*)
# debug-pod            T4        claude-code (staging-*), ops-agent (*)

iddio runbook test

Dry-run a request against the runbook engine. Shows whether the request matches a runbook and what the effective policy decision would be.

iddio runbook test --agent <name> --method <method> --resource <resource> --namespace <ns> [flags]

# Flags:
#   --agent string         Agent name (required)
#   --method string        HTTP method: GET, POST, PUT, PATCH, DELETE (required)
#   --resource string      Kubernetes resource type (required)
#   --namespace string     Target namespace (required)
#   --subresource string   Subresource (e.g., exec, log, scale)
#   --name string          Resource name
#   --dir string           Config directory (default "~/.iddio")

# Examples:
iddio runbook test --agent claude-code --method PATCH --resource deployments --namespace payments
# ✓ Matches runbook "restart-deployment"
#   Rule: kubernetes namespaces=[payments, api-gateway] → T1 allow

iddio runbook test --agent claude-code --method DELETE --resource deployments --namespace payments
# ✗ No runbook match
#   Rule: kubernetes namespaces=[payments, api-gateway] → T3 escalate

Iddio classifies every command into one of five risk tiers based on the HTTP method, Kubernetes API path, and resource type. Classification happens in real time with sub-millisecond latency.

T0 — OBSERVE (0b000)

Read-only operations. Auto-allowed by default.

HTTP Methods: GET, HEAD, OPTIONS

Examples: kubectl get pods · kubectl describe node · kubectl logs deploy/api · kubectl top pods

T1 — OPERATE (0b001)

Pre-approved runbook operations. Auto-allowed when matched to a runbook.

Requests classified as T2, T3, or T4 are downgraded to T1 when they match an operator-defined runbook pattern. The operator controls what happens at T1 via the tiers map in policy.yaml — typically 1: allow for auto-approval.

HTTP Methods: Any method that matches a runbook operation pattern (e.g., PATCH, POST)

Examples: kubectl scale deploy/x --replicas=N · kubectl rollout restart deploy/x · kubectl logs pod/x (via runbook)

T2 — MODIFY (0b010)

Standard write operations. Require human approval by default.

HTTP Methods: POST, PUT, PATCH

Examples: kubectl apply -f manifest.yaml · kubectl create secret · kubectl patch deployment

T3 — SENSITIVE (0b011)

Irreversible or sensitive operations. Require quick operator confirmation.

Secret reads, pod evictions, and all DELETE requests are classified here, regardless of the HTTP method used.

Examples: kubectl delete pod/x · kubectl get secret · pod eviction

T4 — BREAK-GLASS (0b100)

Highest-risk operations. Denied by default.

Dangerous subresources (exec, attach, portforward, proxy) and mutations on sensitive resources (RBAC roles/bindings, webhooks, CRDs, namespaces, PVs).

Examples: kubectl exec -it pod -- sh · kubectl delete namespace prod · kubectl port-forward · creating ClusterRoleBindings


Special classification rules

Some operations are reclassified to a higher tier regardless of HTTP method:

Operation                                 Default Tier   Reclassified To
GET /api/v1/secrets/*                     T0 (observe)   T3 (sensitive)
DELETE any resource                       T2 (modify)    T3 (sensitive)
POST /api/v1/pods/*/eviction              T2 (modify)    T3 (sensitive)
POST /api/v1/pods/*/exec                  T2 (modify)    T4 (break-glass)
POST /api/v1/pods/*/attach                T2 (modify)    T4 (break-glass)
POST /api/v1/pods/*/portforward           T2 (modify)    T4 (break-glass)
Mutate RBAC roles/bindings                T2 (modify)    T4 (break-glass)
Mutate webhooks, CRDs, namespaces, PVs    T2 (modify)    T4 (break-glass)

Reading Secrets is elevated to T3 (sensitive) because Secrets may contain credentials. All DELETEs are elevated to T3 because they are irreversible. Dangerous subresources and privilege-escalation resources are elevated to T4 (break-glass).
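A stripped-down Python sketch of this classification logic (the real classifier in internal/classifier.go covers the full API surface, including the RBAC/webhook/CRD/namespace/PV mutation rules at T4):

```python
BREAK_GLASS_SUBRESOURCES = {"exec", "attach", "portforward", "proxy"}

def classify(method: str, path: str) -> int:
    """Map HTTP method + k8s API path to a risk tier (simplified)."""
    parts = [p for p in path.split("/") if p]
    last = parts[-1] if parts else ""
    if last in BREAK_GLASS_SUBRESOURCES:
        return 4                              # break-glass subresources
    if method == "DELETE":
        return 3                              # deletes are irreversible
    if "secrets" in parts or last == "eviction":
        return 3                              # sensitive reads / evictions
    if method in ("POST", "PUT", "PATCH"):
        return 2                              # standard writes
    return 0                                  # GET/HEAD/OPTIONS reads
```

Note that the Secret check runs before the write-method check, which is how a plain GET of a Secret lands at T3 instead of T0.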

Policy rules are defined in YAML and stored at ~/.iddio/policy.yaml. Each agent gets a set of rules scoped by protocol and by namespace, host, or service depending on the protocol.

Policy structure

Kubernetes rules

agents:
  <agent-name>:
    kubernetes:
      - namespaces: ["<namespace-glob>", ...]
        tiers:
          0: allow | deny # T0 OBSERVE
          1: allow | deny | escalate # T1 OPERATE (runbook-matched)
          2: allow | deny | escalate # T2 MODIFY
          3: allow | deny | escalate # T3 SENSITIVE
          4: allow | deny | escalate # T4 BREAK-GLASS
        runbooks: # optional: assigned runbook names
          - <runbook-name>

Multi-protocol rules

Iddio supports rules for multiple protocols. Each protocol has its own scope fields:

agents:
  <agent-name>:
    kubernetes:
      - namespaces: ["payments", "staging-*"]
        tiers:
          0: allow
          2: escalate

    ssh:
      - hosts: ["prod-web-*"]
        tiers:
          0: allow
          2: escalate

    aws:
      - services: ["s3"]
        regions: ["us-east-1"]
        accounts: ["123456789"]
        tiers:
          0: allow
          2: escalate

    terraform:
      - workspaces: ["prod-*"]
        tiers:
          0: allow
          2: deny

    helm:
      - namespaces: ["payments"]
        releases: ["billing-*"]
        tiers:
          0: allow
          2: escalate

Legacy format

For backward compatibility, the Phase 1 format, which places a rules key directly under the agent, is still supported for Kubernetes-only policies:

agents:
  <agent-name>:
    rules:
      - namespaces: ["<namespace-glob>", ...]
        tiers:
          0: allow
          2: escalate

Runbooks

Runbooks define named patterns for pre-approved operations. When a request matches an assigned runbook, its tier is downgraded to T1 (operate), enabling auto-approval without human intervention.

Defining runbooks

Runbooks are defined under a top-level runbooks: key in policy.yaml:

runbooks:
  restart-deployment:
    description: "Restart a deployment by patching its restart annotation"
    operations:
      - methods: [PATCH]
        resources: [deployments]

  scale-deployment:
    description: "Scale a deployment up or down"
    operations:
      - methods: [PATCH, PUT]
        resources: [deployments]
        subresources: [scale]

  debug-pod:
    description: "Exec into a pod for debugging"
    max_tier: 4 # allows downgrading T4 (default max is T3)
    operations:
      - methods: [GET, POST]
        resources: [pods]
        subresources: [log, exec]

Runbook fields

Field         Required   Default   Description
description   No         ""        Human-readable description for audit logs and CLI output
max_tier      No         3         Highest tier this runbook can downgrade; T4 operations require explicit max_tier: 4
operations    Yes        -         List of operation patterns; a request matches if it matches any operation

Operation fields

Field          Required   Default   Description
methods        Yes        -         HTTP methods to match (GET, POST, PUT, PATCH, DELETE)
resources      Yes        -         Kubernetes resource types (glob patterns)
subresources   No         []        Subresource filter; empty = main resource only, ["*"] = any
names          No         []        Resource name patterns (glob); empty = any name

An operation with an empty subresources list only matches the main resource — it will not match subresources like exec, log, or scale. This prevents a runbook for PATCH deployments from accidentally covering POST pods/exec.
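The matching rule above can be expressed compactly. This is a hypothetical Python rendering of the semantics, not the implementation in internal/runbook.go:

```python
from fnmatch import fnmatch

def operation_matches(op: dict, method: str,
                      resource: str, subresource: str = "") -> bool:
    """True if the request matches one runbook operation pattern."""
    if method not in op["methods"]:
        return False
    if not any(fnmatch(resource, pat) for pat in op["resources"]):
        return False
    subs = op.get("subresources", [])
    if not subs:
        return subresource == ""   # empty list: main resource only
    return any(fnmatch(subresource, pat) for pat in subs)

# A runbook operation for `PATCH deployments` (no subresources listed)
patch_deployments = {"methods": ["PATCH"], "resources": ["deployments"]}
```

With this pattern, a PATCH to deployments/scale does not match: the scale subresource would have to be listed explicitly.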

Assigning runbooks to agents

Runbooks are assigned per policy rule using a runbooks: list. This means runbook access is scoped to the same namespace patterns as the rest of the rule:

agents:
  claude-code:
    kubernetes:
      - namespaces: ["payments", "api-gateway"]
        tiers:
          0: allow
          1: allow # runbook matches: auto-approve
          2: escalate # other writes: require approval
          3: escalate
          4: deny
        runbooks:
          - restart-deployment
          - scale-deployment

The T1 tier decision controls what happens when a runbook matches:

  • 1: allow — runbook matches execute without approval (most common)
  • 1: escalate — runbook matches still require approval, flagged as “known operation” (useful for audit-only mode)
  • 1: deny — runbook matches are denied (useful for temporarily disabling runbook access)
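Schematically, the downgrade happens before the tier decision lookup. The helper below is illustrative; the tier map mirrors a tiers: block from policy.yaml:

```python
def effective_decision(tier: int, tiers: dict,
                       matched_runbook: bool, max_tier: int = 3) -> str:
    """Downgrade runbook matches to T1, then look up the decision."""
    if matched_runbook and tier <= max_tier:
        tier = 1                       # runbook match → T1 (operate)
    return tiers.get(tier, "deny")     # unlisted tiers default to deny

prod_tiers = {0: "allow", 1: "allow", 2: "escalate", 3: "escalate", 4: "deny"}
```

With the default max_tier of 3, a T4 request is never downgraded even when it matches a runbook, so the T4 decision still applies.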

Full example

default: deny

runbooks:
  restart-deployment:
    description: "Restart a deployment"
    operations:
      - methods: [PATCH]
        resources: [deployments]

  scale-deployment:
    description: "Scale a deployment"
    operations:
      - methods: [PATCH, PUT]
        resources: [deployments]
        subresources: [scale]

agents:
  claude-code:
    kubernetes:
      # Production namespaces — read freely, writes need approval, runbooks auto-approve
      - namespaces: ["payments", "api-gateway"]
        tiers:
          0: allow # all reads permitted
          1: allow # runbook-matched operations auto-approved
          2: escalate # writes need human approval
          3: escalate # sensitive ops (deletes, secret reads) need approval
          4: deny # no break-glass operations
        runbooks:
          - restart-deployment
          - scale-deployment

      # Staging namespaces — full read/write access
      - namespaces: ["staging-*"]
        tiers:
          0: allow
          1: allow
          2: allow # writes auto-allowed in staging
          3: escalate # sensitive ops still need approval
          4: escalate # break-glass ops need approval

    ssh:
      - hosts: ["staging-*"]
        tiers:
          0: allow
          2: allow

  cursor-ai:
    kubernetes:
      # Read-only across all namespaces
      - namespaces: ["*"]
        tiers:
          0: allow
          2: deny
          3: deny
          4: deny

Protocol scope fields

Each protocol uses different fields to scope rules:

Protocol     Scope Fields                   Description
kubernetes   namespaces                     Kubernetes namespace globs
ssh          hosts                          SSH host patterns
aws          services, regions, accounts    AWS service, region, and account filters
terraform    workspaces                     Terraform workspace patterns
helm         namespaces, releases           Helm namespace and release patterns

Namespace and scope globs

Scope fields support glob patterns:

Pattern             Matches
"payments"          Exact match for the payments namespace
"staging-*"         Any namespace starting with staging-
"*"                 All namespaces (wildcard)
"dev-*", "test-*"   Multiple patterns in a list

Policy decisions

Each tier in a rule maps to one of three decisions:

Decision   Behavior
allow      Request is forwarded to the cluster immediately
deny       Request is rejected with 403 Forbidden
escalate   Request is held while the operator is prompted for approval

Rule evaluation order

Rules are evaluated in order. The first matching rule wins. If no rule matches, the request is denied by default. This means you should put more specific rules before broad wildcards.
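First-match-wins evaluation can be sketched as follows (illustrative; conceptually mirrors internal/policy.go):

```python
from fnmatch import fnmatch

def evaluate(rules: list, namespace: str, tier: int) -> str:
    """Return the decision from the first rule whose namespaces match."""
    for rule in rules:
        if any(fnmatch(namespace, pat) for pat in rule["namespaces"]):
            return rule["tiers"].get(tier, "deny")
    return "deny"                      # no matching rule: default deny

rules = [
    # Specific rule first: payments writes escalate
    {"namespaces": ["payments"], "tiers": {0: "allow", 2: "escalate"}},
    # Broad wildcard last: everything else is read-only
    {"namespaces": ["*"], "tiers": {0: "allow", 2: "deny"}},
]
```

If the wildcard rule came first, it would shadow the payments rule entirely.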


If an agent has no rules defined, or no rule matches the request’s namespace and tier, the default action is deny. Always define rules explicitly.


Changes to policy.yaml are hot-reloaded automatically. Save the file and the new policy takes effect within a second — no proxy restart needed. If the new YAML is invalid, the proxy logs a warning and keeps the previous working policy.

Each AI agent gets its own kubeconfig that points at the Iddio proxy. The agent uses this kubeconfig instead of a direct cluster kubeconfig. From the agent’s perspective, it’s talking to a normal Kubernetes API server.

Generated kubeconfig

When you run iddio agent add <name>, the following kubeconfig is generated:

apiVersion: v1
kind: Config
current-context: iddio
clusters:
  - cluster:
      server: https://localhost:6443
      certificate-authority-data: <base64-encoded iddio CA cert>
    name: iddio
contexts:
  - context:
      cluster: iddio
      user: agent
    name: iddio
users:
  - name: agent
    user:
      client-certificate-data: <base64-encoded client cert>
      client-key-data: <base64-encoded client key>

When a CA exists (the default after iddio init), the kubeconfig includes mTLS client certificates with a SPIFFE URI SAN. If no CA is available, a bearer token is used instead. The proxy’s --identity-mode flag determines which authentication method is required.

Generated files

Each agent gets a directory under ~/.iddio/agents/<name>/ containing:

File         Purpose
kubeconfig   Points at the proxy, includes credentials (0600)
agent.crt    Client certificate with SPIFFE URI spiffe://iddio.local/agent/<name> (0600)
agent.key    Client private key (0600)

Using with AI agents

Point your agent at the generated kubeconfig:

# Set KUBECONFIG before starting Claude Code
export KUBECONFIG=~/.iddio/agents/claude-code/kubeconfig

# Claude Code will use this kubeconfig for all kubectl commands
# All commands are transparently proxied through Iddio
# Same pattern for any agent that runs kubectl
export KUBECONFIG=~/.iddio/agents/<agent-name>/kubeconfig

# Or pass it explicitly
kubectl --kubeconfig ~/.iddio/agents/<agent-name>/kubeconfig get pods

Token management

Tokens are stored in ~/.iddio/tokens.yaml and mapped to agent names. Changes to this file are hot-reloaded automatically — new agents are recognized within a second without restarting the proxy.

iddio_a1b2c3d4e5f6...: claude-code
iddio_f6e5d4c3b2a1...: cursor-ai

Token files have 0600 permissions. Do not share tokens between agents — each agent should have its own identity for proper audit trails and policy enforcement.

Every request through Iddio is logged to an append-only, hash-chained JSONL file at ~/.iddio/audit.jsonl. The logger is mutex-protected for concurrent access safety.

Log entry format

Each entry is a single JSON line with a SHA-256 hash chain for tamper detection:

{
  "ts": "2026-01-15T14:32:01.003Z",
  "agent": "claude-code",
  "protocol": "kubernetes",
  "method": "GET",
  "path": "/api/v1/namespaces/payments/pods",
  "tier": 0,
  "resource": "pods",
  "namespace": "payments",
  "decision": "allow",
  "status": 200,
  "latency": "312.4µs",
  "prev_hash": "a1b2c3d4e5f6...",
  "hash": "f6e5d4c3b2a1..."
}

For an escalated request, the decision field records the operator's choice:

{
  "ts": "2026-01-15T14:32:45.112Z",
  "agent": "claude-code",
  "protocol": "kubernetes",
  "method": "DELETE",
  "path": "/api/v1/namespaces/payments/pods/api-7d4b",
  "tier": 3,
  "resource": "pods",
  "namespace": "payments",
  "decision": "approved-by-operator",
  "status": 200,
  "latency": "12.453s",
  "prev_hash": "f6e5d4c3b2a1...",
  "hash": "c3d4e5f6a1b2..."
}

A runbook-matched request includes a runbook field and shows tier: 1:

{
  "ts": "2026-01-15T14:33:12.200Z",
  "agent": "claude-code",
  "protocol": "kubernetes",
  "method": "PATCH",
  "path": "/apis/apps/v1/namespaces/payments/deployments/api",
  "tier": 1,
  "resource": "deployments",
  "namespace": "payments",
  "runbook": "restart-deployment",
  "decision": "allow",
  "status": 200,
  "latency": "18.7ms",
  "prev_hash": "f6e5d4c3b2a1...",
  "hash": "b2c3d4e5f6a1..."
}

An exec/attach session includes a session_id linking to the session recording:

{
  "ts": "2026-01-15T14:34:01.500Z",
  "agent": "claude-code",
  "protocol": "kubernetes",
  "method": "POST",
  "path": "/api/v1/namespaces/payments/pods/api-7d4b/exec",
  "tier": 4,
  "resource": "pods",
  "namespace": "payments",
  "decision": "approved-by-operator",
  "session_id": "a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4",
  "status": 101,
  "latency": "45.2s",
  "prev_hash": "c3d4e5f6a1b2...",
  "hash": "d4e5f6a1b2c3..."
}

The status: 101 indicates a successful connection upgrade (HTTP 101 Switching Protocols). Use the session_id to find the corresponding session recording — see the Session Recording section for details.

Hash chain integrity

Every audit entry includes a SHA-256 hash of the entry and a prev_hash linking to the previous entry. This creates a tamper-evident chain — if any entry is modified or deleted, the chain breaks.

Verify the chain at any time:

iddio audit verify
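The verification walk is conceptually simple. The sketch below assumes a particular canonicalization (sorted-key JSON with the hash field removed) purely for illustration; the real scheme is defined by internal/audit.go:

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Illustrative: SHA-256 over the entry's canonical JSON, sans 'hash'."""
    body = {k: v for k, v in entry.items() if k != "hash"}
    canon = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canon.encode()).hexdigest()

def verify_chain(entries: list) -> bool:
    """Each hash must match its entry; each prev_hash must link back."""
    prev = ""
    for e in entries:
        if e.get("prev_hash", "") != prev or e["hash"] != entry_hash(e):
            return False
        prev = e["hash"]
    return True
```

Because prev_hash is part of the hashed body, editing or deleting any earlier entry invalidates every entry after it.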

Querying logs

# Last 20 entries
iddio logs --last 20

# Stream new entries in real-time
iddio logs -f

# For advanced filtering, use jq on the raw JSONL:
jq 'select(.agent == "claude-code")' ~/.iddio/audit.jsonl
jq 'select(.decision == "deny")' ~/.iddio/audit.jsonl
jq 'select(.tier >= 2)' ~/.iddio/audit.jsonl
jq 'select(.runbook != null)' ~/.iddio/audit.jsonl
jq 'select(.session_id != null)' ~/.iddio/audit.jsonl

Log entry properties

Property     Type     Description
ts           string   ISO 8601 UTC timestamp
agent        string   Agent name from identity lookup
protocol     string   Protocol (e.g., kubernetes)
method       string   HTTP method (GET, POST, DELETE, etc.)
path         string   Kubernetes API path
tier         number   Classification tier (0, 1, 2, 3, 4)
resource     string   Kubernetes resource type (e.g., pods)
namespace    string   Target namespace
decision     string   allow, deny, approved-by-operator, denied-by-operator
runbook      string   Matched runbook name (present when a runbook matched)
session_id   string   Session recording ID (exec/attach only)
status       number   HTTP response status code
latency      string   Total request latency as a Go duration string
prev_hash    string   SHA-256 hash of the previous audit entry
hash         string   SHA-256 hash of this audit entry

The audit log file has 0600 permissions and is append-only. Log entries are hash-chained (SHA-256) for tamper detection. Run iddio audit verify to check integrity.

Iddio’s security model is built on one principle: agents never get real cluster credentials. The proxy holds the operator’s credentials and uses them on behalf of the agent, after classification and policy evaluation.

Key security properties

  • No direct cluster access — Agents authenticate to Iddio, not to the cluster. They never see or hold cluster credentials.
  • Transparent to agents — kubectl doesn’t know Iddio exists. The proxy speaks the Kubernetes API natively. From the agent’s view, the cluster is just sometimes slower for writes.
  • Blocking approval — Write commands hold the HTTP connection open until approved or denied. No polling, no request IDs, no retries needed by the agent. Approval can be via terminal prompt or webhook (e.g., Slack).
  • Multiple identity modes — Bearer tokens with constant-time comparison, mTLS with SPIFFE URI SANs, or hybrid (mTLS preferred, token fallback).
  • JIT credentials — In JIT mode, each forwarded request gets a short-lived ServiceAccount token (default 5-minute TTL) instead of long-lived operator credentials.
  • Hash-chained audit — Every audit entry includes a SHA-256 hash linked to the previous entry, creating a tamper-evident chain. Verify with iddio audit verify.
  • Session recording — Exec/attach sessions are recorded to JSONL files with base64-encoded I/O events and linked to the audit log via session_id.
  • Terminal sanitization — Approval prompts sanitize agent-controlled input via sanitizeTerminal() to prevent ANSI injection attacks.

Identity modes

Mode     Flag                    How it works
Token    --identity-mode token   Bearer token in Authorization header, constant-time comparison
mTLS     --identity-mode mtls    SPIFFE URI extracted from client certificate SAN
Hybrid   --identity-mode hybrid  mTLS preferred, falls back to token if no client cert

Credential modes

Mode         Flag                          How it works
Kubeconfig   --credential-mode kubeconfig  Uses operator's kubeconfig transport (long-lived)
JIT          --credential-mode jit         Creates a Just-In-Time ServiceAccount token per request (default 5m TTL, auto-expires)

TLS configuration

Iddio generates a CA and self-signed ECDSA P-256 TLS certificates at initialization. Agent kubeconfigs include the CA certificate so kubectl trusts the proxy. Agent client certs include SPIFFE URI SANs for mTLS identity.

~/.iddio/
├── ca.crt              # CA certificate
├── ca.key              # CA private key (0600)
├── tls/
│   ├── server.crt      # Server TLS certificate (signed by CA)
│   ├── server.key      # Server private key (0600)
│   └── ca.pem          # CA cert PEM for embedding in agent kubeconfigs
└── agents/<name>/
    ├── agent.crt       # Client cert with SPIFFE URI SAN (0600)
    └── agent.key       # Client private key (0600)

File permissions

All sensitive files are created with 0600 permissions (owner read/write only):

File                           Permissions  Description
~/.iddio/ca.key                0600         CA private key
~/.iddio/tls/server.key        0600         TLS private key
~/.iddio/tokens.yaml           0600         Bearer token store
~/.iddio/policy.yaml           0600         Access policy rules
~/.iddio/agents/*/kubeconfig   0600         Agent kubeconfigs
~/.iddio/agents/*/agent.key    0600         Agent private keys
~/.iddio/audit.jsonl           0600         Audit log
~/.iddio/sessions/             0700         Session recordings directory
~/.iddio/sessions/*.jsonl      0600         Individual session recordings

Threat model

With static credentials (--credential-mode kubeconfig): If an agent is compromised, the attacker’s requests are still proxied through Iddio and subject to classification and policy. However, approved requests use the operator’s long-lived kubeconfig. Policy enforcement at the proxy layer limits blast radius.

With JIT credentials (--credential-mode jit): If an agent is compromised, approved requests get a 5-minute JIT token scoped to the operation. Auto-expires, minimal blast radius. Unapproved requests are still blocked by policy.

In both modes, all requests are logged to the hash-chained audit trail, and exec/attach sessions are recorded for forensic replay.

All Iddio state lives in the ~/.iddio/ directory. No external database, no running daemon, no state outside this folder.

Directory layout

~/.iddio/
├── ca.crt                   # CA certificate (signs server + agent certs)
├── ca.key                   # CA private key (0600)
├── tokens.yaml              # Bearer token → agent name mapping (0600, hot-reloaded)
├── policy.yaml              # Per-agent access rules (0600, hot-reloaded)
├── tls/
│   ├── server.crt           # Server TLS cert (signed by CA)
│   ├── server.key           # TLS private key (0600)
│   └── ca.pem               # CA cert PEM for agent kubeconfigs
├── agents/
│   ├── claude-code/
│   │   ├── kubeconfig       # Points at proxy, includes credentials (0600)
│   │   ├── agent.crt        # Client cert with SPIFFE URI SAN (0600)
│   │   └── agent.key        # Client private key (0600)
│   └── cursor-ai/
│       ├── kubeconfig
│       ├── agent.crt
│       └── agent.key
├── sessions/                # Exec/attach session recordings (dir 0700)
│   └── <session-id>.jsonl   # Per-session JSONL: metadata + base64 events (0600)
└── audit.jsonl              # Append-only hash-chained audit log (0600)

tokens.yaml

Maps bearer tokens to agent identities. Generated automatically by iddio agent add. Hot-reloaded by the proxy — changes take effect without restart.

# Bearer token → agent name mapping
# Generated automatically by `iddio agent add`
iddio_a1b2c3d4e5f6...: claude-code
iddio_f6e5d4c3b2a1...: cursor-ai

policy.yaml

Defines per-agent access rules scoped by protocol. Hot-reloaded by the proxy — changes take effect without restart. See the Policy Configuration section for full documentation.

Runbook schema

Runbooks are defined under a top-level runbooks: key:

runbooks:
  <runbook-name>:
    description: "<string>" # optional, human-readable description
    max_tier: <number> # optional, default 3 (set to 4 to cover T4 ops)
    operations: # required, at least one
      - methods: [<HTTP methods>] # required: GET, POST, PUT, PATCH, DELETE
        resources: [<resource globs>] # required: e.g., deployments, pods
        subresources: [<subresources>] # optional: e.g., exec, log, scale
        names: [<name globs>] # optional: resource name patterns

Runbooks are assigned to agents within Kubernetes policy rules via the runbooks: list:

agents:
  <agent-name>:
    kubernetes:
      - namespaces: ["<namespace-glob>"]
        runbooks: [<runbook-name>, ...]
        tiers:
          0: allow
          1: allow # T1 controls runbook-matched requests
          2: escalate

Environment variables

Variable     Default          Description
KUBECONFIG   ~/.kube/config   Operator kubeconfig for cluster access

Start flags

The most common iddio start flags are listed below. See the CLI Reference for the full list.

Flag               Default         Description
--cluster-url      (required)      Real Kubernetes API server URL
--listen           :6443           Proxy listen address
--kubeconfig       ~/.kube/config  Kubeconfig for the real cluster
--dir              ~/.iddio        Config directory
--identity-mode    hybrid          Agent identity: token, mtls, or hybrid
--approval-mode    terminal        Approval: terminal or webhook
--credential-mode  kubeconfig      Credentials: kubeconfig or jit
--in-cluster       false           Use in-cluster Kubernetes config
--admin-listen     :8443           Admin API listen address
--jit-ttl          5m              TTL for JIT tokens

Common issues and their solutions.

kubectl hangs indefinitely on a command

The command was classified as T2 (modify), T3 (sensitive), or T4 (break-glass) and is waiting for approval. Check the Iddio terminal — you should see an approval prompt. Type y to approve or n to deny.

Error: “unknown agent” or 401 Unauthorized

The bearer token in the agent’s kubeconfig doesn’t match any entry in tokens.yaml. Re-run iddio agent add <name> to regenerate credentials, then update the agent’s KUBECONFIG.

Error: “no matching rule” or 403 Forbidden

The agent has no policy rule matching the target namespace and tier. Add a rule to policy.yaml for the agent. Remember: no matching rule = deny by default.
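For example, a minimal rule granting read and list access in a staging namespace (the agent name and namespace are placeholders; adapt them to your setup):

```yaml
agents:
  claude-code:
    rules:
      - namespaces: ["staging"]
        tiers:
          0: allow
          1: allow
```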

Error: agent not recognized despite correct kubeconfig

If you added an agent with iddio agent add and the proxy rejects requests with 401, check the --identity-mode flag. The default mode is hybrid, which accepts both mTLS certificates and bearer tokens. If you explicitly changed to --identity-mode token, the proxy will only check bearer tokens and reject the mTLS client certificates generated by iddio agent add. Switch back to hybrid or mTLS mode:

iddio start --cluster-url https://your-cluster:6443 --identity-mode hybrid

Policy changes not taking effect

Changes to policy.yaml and tokens.yaml are hot-reloaded automatically within about a second. If changes still aren’t reflected, check the proxy logs for YAML parse errors — if the file is invalid, the proxy keeps the previous working config and logs a warning. Fix the YAML syntax and save again.

TLS certificate errors from kubectl

The agent kubeconfig includes Iddio’s self-signed CA certificate. If you regenerated TLS certs (via iddio init), you need to re-add all agents to get updated kubeconfigs.

Proxy won’t start: “address already in use”

Another process is using port 6443 (the default). Either stop it, or start Iddio on a different port with --listen :6444. Remember to update agent kubeconfigs to point at the new port.

Commands are slow even for reads

Check the latency to your real cluster. Iddio adds sub-millisecond overhead for classification and policy evaluation. If reads are slow, the bottleneck is the cluster itself.


Still stuck? Check the Security Disclosure page for how to report issues, or reach out via the Pricing page to talk to engineering.

When an AI agent runs kubectl exec, kubectl attach, or kubectl port-forward through iddio, the proxy intercepts the HTTP upgrade, hijacks the connection, and records every byte that flows between the agent and the container. This creates a forensic-quality audit trail of interactive sessions.

How it works

Exec and attach requests use HTTP connection upgrades (SPDY/3.1 or WebSocket). Normal reverse proxying doesn’t work for these — the proxy must hijack the raw TCP connection and relay the bidirectional stream itself.

1. DETECT: The proxy checks for Connection: Upgrade headers. Exec, attach, and port-forward requests are routed to the session handler instead of the normal reverse proxy path.

2. RECORD: A SessionRecorder creates a JSONL file in ~/.iddio/sessions/. The first line is metadata (agent, pod, namespace, command). Every chunk of data — stdin from the agent, stdout from the container — is base64-encoded and appended as an event line.

3. RELAY: Two goroutines copy data bidirectionally between the agent and the real cluster, recording each chunk before forwarding it. Any buffered data from the HTTP handshake is drained first to prevent data loss.

4. LINK: The audit log entry for the exec request includes a session_id field that links to the session recording file. This connects the high-level audit trail to the detailed session capture.

Session file format

Each session is stored as a JSONL file in ~/.iddio/sessions/<session-id>.jsonl. The file has three types of lines:

Opening metadata

{
  "type": "session_start",
  "session_id": "a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4",
  "agent": "claude-code",
  "namespace": "payments",
  "pod": "api-7f8d9",
  "container": "app",
  "command": ["/bin/sh", "-c", "ls -la"],
  "subresource": "exec",
  "started_at": "2026-02-08T14:32:00.123Z"
}

Data events

{
  "type": "session_event",
  "ts": "2026-02-08T14:32:00.456Z",
  "channel": "stdout",
  "data": "dG90YWwgNDIKZHJ3eHIteHIteA==",
  "len": 24
}

The channel field is "stdin" for data sent by the agent or "stdout" for data received from the container. The data field is base64-encoded.

Closing metadata

{
  "type": "session_end",
  "session_id": "a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4",
  "agent": "claude-code",
  "namespace": "payments",
  "pod": "api-7f8d9",
  "ended_at": "2026-02-08T14:34:31.789Z",
  "bytes_in": 1234,
  "bytes_out": 45678
}

CLI commands

List sessions

iddio sessions list

# Output:
# SESSION ID                       AGENT        POD                COMMAND          STARTED              DURATION  IN     OUT
# a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4 claude-code  payments/api-7f8d9 ["/bin/sh","-c"] 2026-02-08 14:32     2m31s     1.2KB  45KB

Replay a session

# Replay with original timing
iddio sessions replay a1b2c3d4e5f6

# Replay at 5x speed
iddio sessions replay a1b2c3d4e5f6 --speed 5

Replay decodes the base64 data, computes the delay between consecutive timestamps, and prints the output with the original timing preserved. All output is sanitized to prevent ANSI escape sequence injection.

Inspect a session

iddio sessions inspect a1b2c3d4e5f6

Dumps the raw session events as formatted JSON for machine consumption or debugging.

Audit log linkage

Exec requests appear in the audit log with a session_id field:

{
  "ts": "2026-02-08T14:32:00.123Z",
  "agent": "claude-code",
  "method": "POST",
  "path": "/api/v1/namespaces/payments/pods/api-7f8d9/exec",
  "tier": 4,
  "resource": "pods",
  "namespace": "payments",
  "decision": "allow",
  "status": 101,
  "session_id": "a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4",
  "latency": "2m31s",
  "prev_hash": "a3f1c8...d94e",
  "hash": "7b2e4a...f031"
}

The status: 101 indicates a successful connection upgrade (HTTP 101 Switching Protocols). Use the session_id to find the corresponding session file.

Security

  • Session files have 0600 permissions (owner read/write only)
  • The sessions directory has 0700 permissions
  • Replay output is sanitized via sanitizeOutput() to strip OSC, DCS, and APC escape sequences
  • Session IDs are 128-bit cryptographically random hex strings

Exec, attach, and port-forward operations are classified as T4 (break-glass). They require explicit 4: allow or 4: escalate in the agent’s policy before the session can be established.

When a policy rule returns escalate, Iddio blocks the HTTP connection and waits for a human operator to approve or deny the request. Two approval modes are available.

Terminal mode

The default mode. Escalated requests appear as interactive prompts on the operator’s terminal.

iddio start --cluster-url https://... --approval-mode terminal

How it works

When a request is escalated, the operator sees:

ESCALATE  [deploy-bot]  DELETE default/pods
   tier 3 (sensitive) — approve? [y/N] (60s timeout):

  • Type y or yes to approve — the request is forwarded to the cluster
  • Type n, no, or press Enter to deny — the request returns 403
  • If no input is received within 60 seconds, the request is automatically denied

Terminal mode is serialized — only one approval prompt can be active at a time. Concurrent escalations queue behind a mutex. This is acceptable for single-operator setups but not for production multi-agent environments. Use webhook mode for production.

Security

Agent names, namespace paths, and resource names in the approval prompt are sanitized to strip ANSI escape sequences and control characters. This prevents an attacker from crafting malicious Kubernetes resource names that spoof the approval prompt.

Webhook mode

Escalated requests are sent to an external HTTP endpoint. The operator approves or denies through an external system (e.g., Slack, PagerDuty, a custom dashboard).

iddio start \
  --cluster-url https://... \
  --approval-mode webhook \
  --webhook-url https://your-endpoint/approvals \
  --webhook-secret your-hmac-secret \
  --callback-url https://iddio.example.com:8443

How it works

1. REQUEST: Iddio generates a unique approval ID (128-bit random hex) and POSTs a JSON payload to your webhook URL.

2. BLOCK: The HTTP connection from the agent blocks, waiting for a callback from the external system.

3. DECIDE: The external system displays the approval request. The operator clicks approve or deny.

4. RESUME: The external system POSTs back to Iddio’s callback URL. Iddio resumes the blocked connection — forwarding the request or returning 403.

Webhook payload

{
  "id": "a1b2c3d4e5f6...",
  "agent": "deploy-bot",
  "method": "DELETE",
  "path": "/api/v1/namespaces/default/pods/my-pod",
  "tier": 3,
  "verb": "sensitive",
  "resource": "pods",
  "namespace": "default",
  "timestamp": "2025-01-15T10:30:00Z",
  "expires": "2025-01-15T10:35:00Z",
  "callback_url": "https://iddio.example.com:8443/callbacks/approvals/a1b2c3d4e5f6...",
  "approve_url": "https://iddio.example.com:8443/callbacks/approvals/a1b2c3d4e5f6.../approve",
  "deny_url": "https://iddio.example.com:8443/callbacks/approvals/a1b2c3d4e5f6.../deny"
}

HMAC signature

Every webhook payload is signed with HMAC-SHA256 using the configured --webhook-secret. The signature is sent in the X-Iddio-Signature header as a hex-encoded string.

To verify:

import hmac, hashlib

# `body` is the raw request body bytes; `secret` is the --webhook-secret value
expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
assert hmac.compare_digest(request.headers["X-Iddio-Signature"], expected)

HMAC verification is mandatory on callback endpoints. If a webhook secret is not provided via --webhook-secret, Iddio auto-generates a 32-byte hex secret at startup.

Callback endpoints

Iddio exposes callback endpoints on the admin listener (--admin-listen, default :8443):

Endpoint                           Method  Effect
/callbacks/approvals/{id}/approve  POST    Approve the pending request
/callbacks/approvals/{id}/deny     POST    Deny the pending request
/healthz                           GET     Health check (returns ok)

If an approval has already expired or been resolved, the callback returns 410 Gone.

Timeout

If no callback arrives before the configured timeout, the request is automatically denied.

Slack integration

Iddio includes a ParseSlackCallback helper for parsing Slack interactive message callbacks. Slack sends button clicks as application/x-www-form-urlencoded with a payload JSON field.

Workflow

  1. Configure your webhook URL to point at a Slack bot or incoming webhook
  2. The bot formats the Iddio webhook payload as a Slack message with Approve/Deny buttons
  3. When the operator clicks a button, Slack sends a callback to your bot
  4. Your bot calls ParseSlackCallback to extract the approval ID and action
  5. Your bot POSTs to Iddio’s callback URL

Configuration flags

Flag              Default   Description
--approval-mode   terminal  terminal or webhook
--webhook-url               URL to POST approval requests to
--webhook-secret            HMAC-SHA256 signing secret (auto-generated if empty)
--callback-url              Base URL for approval callbacks
--admin-listen    :8443     Admin HTTP listener address

Iddio can use Just-In-Time (JIT) credentials instead of a static kubeconfig to authenticate to the upstream Kubernetes cluster. JIT mode mints short-lived ServiceAccount tokens via the Kubernetes TokenRequest API, reducing the window of exposure if a token is intercepted.

Credential modes

Mode                Flag value            How it works
Static kubeconfig   kubeconfig (default)  Uses the operator’s kubeconfig to authenticate to the cluster
JIT tokens          jit                   Mints short-lived tokens via the TokenRequest API

# Static mode (default)
iddio start --cluster-url https://... --credential-mode kubeconfig

# JIT mode
iddio start --cluster-url https://... --credential-mode jit --in-cluster

JIT mode requires --in-cluster because it uses the pod’s service account to call the TokenRequest API. This means Iddio must be running as a pod inside the Kubernetes cluster.

How JIT works

1. INIT: Iddio runs inside the cluster as a pod with a ServiceAccount that has RBAC permission to create tokens (serviceaccounts/token create).

2. MINT: On the first proxied request, Iddio calls the TokenRequest API to mint a short-lived token with the configured TTL.

3. CACHE: The token is cached in memory and reused for subsequent requests. Within 30 seconds of expiry, the next request triggers a new token mint.

4. INJECT: The fresh token is injected into the Authorization: Bearer header of the upstream request before it reaches the Kubernetes API server.

Configuration flags

Flag                   Default       Description
--credential-mode      kubeconfig    kubeconfig or jit
--jit-namespace        iddio-system  Namespace of the Iddio ServiceAccount
--jit-service-account  iddio-proxy   Name of the ServiceAccount
--jit-ttl              5m            Token TTL (duration string, e.g., 5m, 10m, 1h)
--in-cluster           false         Use in-cluster config for the Kubernetes client

RBAC requirements

JIT mode requires a ClusterRole with permission to create ServiceAccount tokens:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: iddio-proxy
rules:
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["create"]

The Helm chart creates this ClusterRole automatically when rbac.create: true (default).

Exec/attach sessions

For exec/attach sessions, the connection is hijacked from the standard HTTP proxy path and routed through a raw TCP/TLS tunnel. The JIT credential source implements a CredentialInjector interface that injects a fresh token directly into the raw HTTP request headers before writing them to the upstream connection.

type CredentialInjector interface {
    InjectCredentials(r *http.Request) error
}

Static credentials do not implement this interface — the static transport handles authentication at the RoundTripper level, which works for proxied requests but not for raw connections.

Security properties

Property             Static kubeconfig               JIT tokens
Token lifetime       Long-lived (kubeconfig expiry)  Short-lived (default 5 min)
Exposure window      Until kubeconfig is rotated     Maximum of TTL duration
Requires in-cluster  No                              Yes
RBAC footprint       Operator’s permissions          Minimal (token create only)
Token storage        On disk (kubeconfig file)       In memory only

JIT mode is recommended for production Kubernetes deployments where Iddio runs as a pod inside the cluster. The minimal RBAC footprint and short-lived tokens significantly reduce the blast radius of a credential compromise.

Iddio can be run locally as a desktop app or CLI proxy, or deployed to Kubernetes via Helm chart.

Desktop app (macOS)

A native macOS app with a visual policy editor, approval dialogs, session viewer, and built-in terminal.

Homebrew Cask:

brew install --cask leonardaustin/tap/iddio-desktop

DMG download — grab the latest .dmg from GitHub Releases (look for desktop-v* tags), open it, and drag Iddio Desktop to Applications.

The desktop app handles initialization and proxy management through the GUI.

CLI proxy

A single binary you run in a terminal. Lightweight, scriptable, works on macOS and Linux.

Homebrew:

brew install leonardaustin/tap/iddio

Then initialize and start the proxy:

iddio init
iddio agent add my-agent
# Edit ~/.iddio/policy.yaml
iddio start --cluster-url https://your-k8s-api:6443

The proxy listens on :6443 with terminal approval mode.

Kubernetes (Helm)

The Helm chart deploys Iddio as a Deployment inside the cluster with in-cluster credentials, mTLS identity, JIT tokens, and webhook approval. Contact us for container images and Helm chart access.

Install

helm install iddio deploy/helm/iddio/ \
  --namespace iddio-system --create-namespace \
  --set image.repository=your-registry/iddio \
  --set approval.mode=webhook \
  --set approval.webhookURL=https://your-webhook-endpoint/approvals \
  --set approval.callbackURL=https://iddio.example.com:8443

TLS certificates

The chart expects a Kubernetes Secret containing the CA and server certificates:

kubectl create secret generic iddio-tls \
  --namespace iddio-system \
  --from-file=ca.crt \
  --from-file=ca.key \
  --from-file=server.crt \
  --from-file=server.key

Or let iddio init generate them and create the secret from the output files.

Helm values reference

Value                     Default       Description
image.repository          iddio         Container image repository
image.tag                 0.2.0         Image tag
proxy.listenPort          6443          Port the proxy listens on
proxy.adminPort           8443          Port for admin/callback API
proxy.jit.ttl             5m            JIT token TTL
proxy.jit.namespace       iddio-system  Namespace for JIT ServiceAccount
proxy.jit.serviceAccount  iddio-proxy   ServiceAccount name
approval.mode             webhook       Approval mode (terminal or webhook)
approval.webhookURL                     Webhook URL for approval requests
approval.webhookSecret                  HMAC signing secret
approval.callbackURL                    Callback base URL
tls.secretName            iddio-tls     Name of the TLS Secret
service.type              ClusterIP     Kubernetes Service type
rbac.create               true          Create RBAC resources
rbac.useClusterAdmin      false         Use cluster-admin ClusterRole

What the chart creates

Resource             Purpose
Deployment           Runs the Iddio proxy pod
Service (ClusterIP)  Exposes the proxy on port 6443
ServiceAccount       Identity for the Iddio pod
ClusterRole          RBAC permissions for JIT token creation
ClusterRoleBinding   Binds the ClusterRole to the ServiceAccount
ConfigMap            Policy configuration
Secret               TLS certificates and webhook secret

Exposing the proxy

The default Service type is ClusterIP, which means the proxy is only reachable from inside the cluster. To expose it externally:

  • Use service.type: LoadBalancer for cloud environments
  • Use an Ingress resource with TLS passthrough
  • Use a NodePort service for on-premises clusters

Agents outside the cluster need network access to the proxy’s Service IP and port 6443. Ensure your firewall rules and network policies allow this traffic.

Health checks

The admin API exposes a /healthz endpoint on the admin port (default 8443) that returns 200 OK.

Iddio’s policy engine supports rules scoped to multiple infrastructure protocols beyond Kubernetes. This allows a single policy file to govern agent access across different systems.

Supported protocols

Protocol    Description                     Scope fields
kubernetes  Kubernetes API operations       namespaces
ssh         SSH connections to hosts        hosts
aws         AWS API calls                   services, regions (optional), accounts (optional)
terraform   Terraform workspace operations  workspaces
helm        Helm release operations         namespaces, releases (optional)

Policy format

Multi-protocol policies use protocol-specific sections instead of the rules: key:

default: deny

agents:
  infra-bot:
    kubernetes:
      - namespaces: ["default", "staging-*"]
        tiers:
          0: allow
          2: allow

    ssh:
      - hosts: ["prod-web-*"]
        tiers:
          0: allow
          2: escalate

    aws:
      - services: ["s3", "dynamodb"]
        regions: ["us-east-1"]
        tiers:
          0: allow
          2: escalate

    terraform:
      - workspaces: ["staging"]
        tiers:
          0: allow
          2: allow
      - workspaces: ["production"]
        tiers:
          0: allow
          2: escalate

    helm:
      - namespaces: ["default"]
        releases: ["my-app"]
        tiers:
          0: allow
          2: escalate

The legacy rules: key is treated as kubernetes: for backward compatibility. Existing policy files continue to work without changes.

Scope fields

Required vs. optional scope fields

Each protocol has one required scope field and zero or more optional scope fields:

Protocol    Required    Optional
kubernetes  namespaces  (none)
ssh         hosts       (none)
aws         services    regions, accounts
terraform   workspaces  (none)
helm        namespaces  releases

Required fields: The request must match at least one pattern in the list. A rule with an empty required field will never match.

Optional fields: If not specified, the field is unconstrained (matches everything). If specified, the request must match at least one pattern.

Glob patterns

All scope fields support glob patterns using Go’s filepath.Match syntax:

Pattern     Matches
*           Everything
prod-web-*  prod-web-1, prod-web-us-east, etc.
team-?      team-a, team-b (single-character wildcard)
us-east-1   Exact match only

Protocol-specific details

Kubernetes

Scope is based on the namespace extracted from the API path.

kubernetes:
  - namespaces: ["payments", "api-gateway"]
    tiers:
      0: allow
      2: escalate

Cluster-scoped resources (e.g., clusterroles, namespaces) have an empty namespace. Use "*" to match cluster-scoped resources.

SSH

SSH rules match against the target hostname.

ssh:
  - hosts: ["prod-web-*"]
    tiers:
      0: allow
      2: escalate

AWS

AWS rules match against the AWS service name, with optional region and account constraints.

aws:
  - services: ["s3", "dynamodb"]
    regions: ["us-east-1", "us-west-2"]
    accounts: ["123456789012"]
    tiers:
      0: allow
      2: escalate

If regions or accounts are omitted, the rule matches all regions/accounts for the specified services.

Terraform

Terraform rules match against workspace names.

terraform:
  - workspaces: ["staging"]
    tiers:
      0: allow
      2: allow
  - workspaces: ["production"]
    tiers:
      0: allow
      2: escalate

Helm

Helm rules match against the Kubernetes namespace and optionally the Helm release name.

helm:
  - namespaces: ["default", "staging-*"]
    releases: ["my-app", "monitoring-*"]
    tiers:
      0: allow
      2: escalate

If releases is omitted, the rule matches all releases in the specified namespaces.

Classification integration

The Classification struct includes protocol-specific fields:

Field                 Used by
Protocol              All — determines which protocol rules to evaluate
Namespace             kubernetes, helm
Labels["host"]        ssh
Labels["service"]     aws
Labels["region"]      aws
Labels["account"]     aws
Labels["workspace"]   terraform
Labels["release"]     helm

Validation

The policy validator checks for:

  • Rules with empty required scope fields (will never match)
  • Scope fields used in the wrong protocol section (e.g., hosts on a kubernetes rule)
  • Unknown protocol names
  • Invalid glob patterns
  • Invalid decision strings

Validation issues produce warnings at startup but do not prevent the proxy from starting. Review warnings carefully to ensure your policy rules match as intended.

The enterprise control plane is a PostgreSQL-backed HTTP server that manages fleets of proxies across multiple clusters. It adds OIDC-authenticated operator access, four-role RBAC, ETag-based config sync, and centralized audit aggregation — while remaining fully compatible with the open-source proxy.

Deployment Topologies

Three topologies are supported:

Self-hosted embedded — single binary, single process. Run iddio-server serve --embedded-proxy to start both the control plane and proxy together. The proxy reads policy from the database directly via DBPolicySource and writes audit events to the database via DBAuditWriter. No inter-process communication required.

Managed — proxy deployed separately with --control-plane <url>. The proxy polls the control plane for config changes (ETag-based), forwards audit events in batches, and sends heartbeats every 30 seconds. The control plane manages agent identities, policies, and approval routing.

Self-hosted separate — independent proxy and server deployments, connected the same way as managed mode but both running in your own infrastructure.

Starting the Control Plane

# Run migrations first
iddio-server migrate --database-url postgres://user:pass@host/iddio

# Seed the initial organization
iddio-server seed --database-url postgres://user:pass@host/iddio --tenant-slug myorg

# Start the server
iddio-server serve \
  --database-url postgres://user:pass@host/iddio \
  --listen :8080 \
  --tenant-slug myorg \
  --signing-key $(openssl rand -hex 32)

For embedded mode with a local cluster:

iddio-server serve \
  --database-url postgres://user:pass@host/iddio \
  --embedded-proxy \
  --cluster-url https://kubernetes.default.svc \
  --in-cluster \
  --proxy-listen :6443 \
  --tenant-slug myorg

OIDC Authentication

Operators authenticate via any OIDC-compliant identity provider — Google Workspace, Okta, Azure AD (Entra ID), Auth0, Keycloak, or any provider that supports OpenID Connect Discovery.

1. Create an OAuth2 / OIDC Application

In your identity provider’s admin console, create a new application (sometimes called a “client” or “app registration”) with the following settings:

Setting                  Value
Application type         Web application
Grant type               Authorization Code
Authorized redirect URI  https://<your-iddio-domain>/api/v1/auth/callback
Scopes                   openid, email, profile

The redirect URI must match exactly — including the scheme, domain, port (if non-standard), and path. For example, if your server runs at https://iddio.acme.com, the redirect URI is:

https://iddio.acme.com/api/v1/auth/callback

After creating the application, note the Client ID and Client Secret.

2. Find Your Issuer URL

The issuer URL is the base URL of your identity provider’s OIDC discovery document. Iddio fetches <issuer>/.well-known/openid-configuration automatically to discover authorization and token endpoints.

Common issuer URLs:

Provider             Issuer URL
Google Workspace     https://accounts.google.com
Okta                 https://your-org.okta.com
Azure AD (Entra ID)  https://login.microsoftonline.com/<tenant-id>/v2.0
Auth0                https://your-tenant.auth0.com/
Keycloak             https://keycloak.example.com/realms/your-realm

3. Configure the Server

Pass the OIDC settings as CLI flags or environment variables when starting the server:

iddio-server serve \
  --database-url postgres://user:pass@host/iddio \
  --tenant-slug myorg \
  --oidc-issuer https://accounts.google.com \
  --oidc-client-id your-client-id \
  --oidc-client-secret your-client-secret \
  --oidc-redirect-url https://iddio.acme.com/api/v1/auth/callback

Or as environment variables:

OIDC_ISSUER=https://accounts.google.com
OIDC_CLIENT_ID=your-client-id
OIDC_CLIENT_SECRET=your-client-secret
OIDC_REDIRECT_URL=https://iddio.acme.com/api/v1/auth/callback

4. Verify

Visit your Iddio server in a browser and click SSO Login. You should be redirected to your identity provider, authenticate, and land on the dashboard.

If you see an error like redirect_uri_mismatch, double-check that the redirect URI in your identity provider’s configuration matches the --oidc-redirect-url value exactly.

Desktop App

To connect the desktop app to the control plane, enter just the domain (e.g. acme.iddio.dev) in the enterprise settings panel. The app performs OIDC login via the server — no additional identity provider configuration is needed beyond what was set up above.

Session Lifecycle

After login, the server issues a short-lived JWT (HMAC-SHA256) and sets a session cookie. The JWT contains the operator email and role, and is verified on every API request.

RBAC Roles

Four roles control what operators can do:

Role       Capabilities
admin      Full access: manage clusters, agents, policies, operators, and approvals
operator   Manage clusters and agents; view audit and sessions; act on approvals
approver   View policy and audit; approve or deny escalation requests
viewer     Read-only access to all data

Roles are assigned per operator and enforced by RequireRole middleware on each API endpoint.

Proxy Config Sync

Proxies in managed mode poll the control plane for configuration changes:

iddio start \
  --control-plane https://api.myorg.iddio.dev \
  --proxy-cert ~/.iddio/agents/my-proxy/agent.crt \
  --proxy-key ~/.iddio/agents/my-proxy/agent.key \
  --sync-interval 30s

The control plane responds to GET /api/v1/proxy/config with an ETag computed from the current policy version and active agent list. The proxy only deserializes and applies the new config when the ETag changes — a no-op poll costs one HTTP round-trip.

Proxies identify themselves via mTLS client certificates signed by iddio’s CA. The control plane verifies the certificate and maps the SPIFFE URI to a registered proxy identity.

Audit Forwarding

In managed mode, audit events are written locally first (AuditLogger) and forwarded to the control plane in batches (RemoteAuditSink). The DualAuditLogger combines both:

agent request
    → local audit.jsonl (hash-chained, always written)
    → batch buffer → POST /api/v1/proxy/audit (every 5s or 100 events)

If the control plane is unreachable, events accumulate in the local buffer and flush when connectivity is restored. The hash chain is maintained locally; the control plane stores the events as-received for querying and export.

REST API

The control plane exposes a versioned REST API at /api/v1/:

GET    /api/v1/org                    Organization info
POST   /api/v1/clusters               Register a cluster
GET    /api/v1/clusters               List clusters
DELETE /api/v1/clusters/:id           Remove a cluster

POST   /api/v1/agents                 Create agent (issues cert + kubeconfig)
GET    /api/v1/agents                 List agents
DELETE /api/v1/agents/:id             Revoke agent

GET    /api/v1/policy                 Get current policy YAML
PUT    /api/v1/policy                 Update policy (creates new version)
GET    /api/v1/policy/history         List policy versions

GET    /api/v1/audit                  Query audit events (filters: agent, tier, start/end)
GET    /api/v1/audit/export           Export audit events (JSON or CSV)
GET    /api/v1/audit/stats            Aggregated statistics

GET    /api/v1/approvals              List pending approvals
POST   /api/v1/approvals/:id/approve  Approve an escalation
POST   /api/v1/approvals/:id/deny     Deny an escalation

GET    /api/v1/operators              List operators
PUT    /api/v1/operators/:id          Update operator role
DELETE /api/v1/operators/:id          Remove operator

All endpoints require a valid JWT or session cookie. The proxy/ endpoints (/api/v1/proxy/*) require mTLS client certificates and are not accessible to human operators.

Database Schema

The control plane uses five core tables plus time-partitioned audit events:

  • org — organization identity and config
  • clusters — registered Kubernetes clusters
  • agents — agent identities with certificate metadata and cluster assignments
  • policies — versioned policy YAML with audit trail
  • operators — operator accounts with roles and OIDC subject linkage
  • audit_events — partitioned by month for query performance; auto-creates partitions 3 months ahead

Run iddio-server migrate to apply all migrations. Each migration is idempotent.

Tenant Provisioning

For multi-tenant SaaS deployments, the tenant provisioner creates isolated databases per customer:

# Provision a new tenant
iddio-server tenant create acme --plan enterprise

# List all tenants
iddio-server tenant list

# Deprovision (requires --force)
iddio-server tenant delete acme --force

Each tenant gets a separate PostgreSQL database (iddio_acme), separate migrations, and a separate deployment. Traffic is routed by subdomain (acme.iddio.dev). There is no shared database or cross-tenant query path.

Iddio acts as an MCP (Model Context Protocol) gateway — a policy-enforced intermediary between AI agents and MCP tool servers. Every tool call is classified into a risk tier, evaluated against policy, and written to the audit log. The same 5-tier system that governs Kubernetes, SSH, and Terraform applies to MCP tool calls.

How it works

MCP uses JSON-RPC 2.0 for communication between clients (AI agents) and servers (tool providers). Iddio sits between them, adding three capabilities that MCP doesn’t provide natively:

  1. Classification — every tool call is mapped to a risk tier (T0–T4)
  2. Policy enforcement — per-agent rules determine which tools are allowed, denied, or escalated
  3. Audit logging — every tool call is recorded with agent identity, classification, and decision

Protocol translation

The MCP classifier doesn’t classify tool calls from scratch. It translates them back into the native protocol they represent, then delegates to the existing classifier for that protocol.

When an agent calls kubernetes_get_secret, the classifier translates the tool name and arguments into an HTTP method and Kubernetes API path — GET /api/v1/namespaces/prod/secrets/db-creds — and hands it to the Kubernetes classifier. That classifier already knows GET on secrets is T3 (sensitive).

func (c *MCPClassifier) Classify(toolName string, args map[string]any) Classification {
    if strings.HasPrefix(toolName, "kubernetes_") {
        return c.classifyKubernetes(toolName, args)
    }
    if strings.HasPrefix(toolName, "ssh_") {
        return c.classifySSH(toolName, args)
    }
    // ... other protocols

    // Unknown tools — fail closed at T4
    return Classification{Tier: TierBreakGlass, Resource: toolName, Protocol: "mcp"}
}

Unknown tools that don’t match any protocol prefix are assigned T4 (break-glass) by default. This is fail-closed by design — configure custom tiers for known tools to lower their classification.

Transports

Iddio supports two MCP transports:

Transport         Description                   Use case
stdio             JSON-RPC over stdin/stdout    Local development, CLI agents
Streamable HTTP   JSON-RPC over HTTP POST       Server deployments, remote clients

Stdio transport

For local development, configure your MCP client to launch iddio as a subprocess:

{
  "mcpServers": {
    "iddio": {
      "command": "/path/to/iddio-desktop",
      "args": ["--mcp-bridge"]
    }
  }
}

Streamable HTTP transport

For server deployments, point your MCP client at the /mcp endpoint:

# The server exposes /mcp with bearer token authentication
curl -X POST https://your-server.example.com/mcp \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{...}}'

Policy-filtered tool discovery

When an agent sends tools/list, iddio classifies every registered tool and evaluates each against the calling agent’s policy. Tools that policy would deny are stripped from the response — the agent never sees them.

This is defense in depth: even if an agent calls a hidden tool directly, the tools/call handler runs the same classify-then-enforce pipeline and denies it. But filtering at discovery saves the agent from wasting tokens on operations it can’t perform.

Progressive disclosure

For large tool catalogs, iddio supports progressive disclosure. Instead of listing all tools upfront, the initial tools/list response contains only meta-tools:

  • iddio_list_categories — lists available tool categories
  • iddio_describe_tools — reveals tools in a specific category

This reduces token usage for agents that only need a subset of available tools. As agents explore categories, discovered tools are added to their session’s visible tool list.

progressive_disclosure:
  mode: categories
  max_initial_tools: 20
  idle_prune_after: 5m
  pinned_tools:
    - kubernetes_get_pods
    - kubernetes_get_logs

Pinned tools are always visible in tools/list regardless of progressive disclosure settings.

Upstream proxy

Iddio proxies to external MCP servers. Register upstream servers, and iddio discovers their tools, merges them into its catalog, and applies policy enforcement to forwarded calls.

mcp:
  upstreams:
    - name: internal-db
      url: https://db-mcp.internal:8443/mcp
      transport: streamable-http
      tools_prefix: db
      auth_type: bearer

    - name: monitoring
      url: https://grafana-mcp.internal:8443/mcp
      transport: streamable-http
      tools_prefix: grafana

Configurable prefixes resolve tool name collisions between upstreams. The proxy strips the prefix before forwarding to the upstream.
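Routing by prefix can be sketched as a single lookup that both picks the upstream and strips the prefix. The prefix table below mirrors the example config; the helper itself is illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveUpstream maps a prefixed tool name to its upstream and the
// unprefixed name forwarded to it, per tools_prefix configuration.
func resolveUpstream(tool string, prefixes map[string]string) (upstream, forwarded string, ok bool) {
	for prefix, name := range prefixes {
		if strings.HasPrefix(tool, prefix+"_") {
			return name, strings.TrimPrefix(tool, prefix+"_"), true
		}
	}
	return "", "", false // not an upstream tool: handled locally
}

func main() {
	prefixes := map[string]string{"db": "internal-db", "grafana": "monitoring"}
	up, fwd, _ := resolveUpstream("db_run_query", prefixes)
	fmt.Println(up, fwd) // internal-db run_query
}
```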

Step-up authentication

When a tool call receives an Escalate decision from policy, the server returns a JSON-RPC error with code -32001 and includes an approval_url where the operator can approve or deny the request:

{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32001,
    "message": "approval required",
    "data": {
      "scope_required": "mcp:tools:write",
      "approval_url": "https://server.example.com/approvals/abc-123"
    }
  }
}

For desktop app users, approval events are pushed via WebSocket and shown as native approval dialogs — the same UI used for local Kubernetes escalations.

Per-tool rate limits

Rate limiting can be configured per-tool or by pattern:

mcp:
  rate_limits:
    - tool: "kubernetes_delete*"
      max: 5
      window: 1m
    - tool: "*"
      max: 100
      window: 1m

When a rate limit is exceeded, the server returns a JSON-RPC error with the standard -32000 code and a Retry-After header.

Enterprise server endpoint

The enterprise control plane exposes its own /mcp endpoint that aggregates all configured upstream MCP servers. Operators connect their desktop app or any MCP client directly to the server — no local proxy required.

The server endpoint provides:

  • Centralized policy — one policy governs all MCP tool access across the organization
  • Centralized audit — all tool calls are recorded in the server’s PostgreSQL database
  • Upstream aggregation — tools from multiple MCP servers appear as a single catalog
  • Automatic tool sync — the server periodically discovers new tools from upstreams
  • Blocking approvals — escalated tool calls block until an operator approves or denies

Audit fields

MCP tool calls add three fields to audit events:

Field            Description
mcp_tool         The tool name (e.g., kubernetes_get_pods)
mcp_upstream     The upstream server that handled the call
mcp_session_id   The MCP session ID for the connection

These fields are queryable in the audit API and visible in the dashboard’s audit log page.

Need help?

Can't find what you're looking for? Reach out to the engineering team.