ICP Platform - Kubernetes Integration Guide

Complete guide for integrating ICP (Identity Control Plane) workload identity on Kubernetes clusters (EKS, AKS, GKE, on-premise).

Prerequisites

Cluster Requirements

  • Kubernetes Version: 1.20+
  • Container Runtime: containerd, CRI-O, or Docker
  • Node Count: Any (agent runs as DaemonSet)
  • RBAC: Enabled
  • Networking: Outbound HTTPS (443) to ICP Server

Tools Required

# Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

# Install Helm (recommended)
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Verify access
kubectl cluster-info
kubectl get nodes

Credentials

You'll need:

  1. Tenant ID - Provided by AuthSec
  2. ICP Server URL - Provided by AuthSec
  3. Cluster Name - Unique identifier for this cluster (configured by customer)

Architecture Overview

Kubernetes Deployment Architecture

┌────────────────────────────────────────────────────────────┐
│ Kubernetes Cluster                                         │
│                                                            │
│  ICP Agent DaemonSet (one icp-agent pod per node)          │
│                                                            │
│  ┌── Node (pattern repeats on every node) ──────────────┐  │
│  │                                                      │  │
│  │  icp-agent (Pod)                                     │  │
│  │    Socket: /run/spire/sockets/agent.sock             │  │
│  │         │                                            │  │
│  │         │ hostPath                                   │  │
│  │         ▼                                            │  │
│  │  Workload Pods (api-gateway, web-service,            │  │
│  │  db-service, ... each with the SDK)                  │  │
│  │    Volume: hostPath → /run/spire/sockets             │  │
│  │    Env (Downward API): POD_NAME, POD_NAMESPACE,      │  │
│  │                        POD_UID, POD_LABEL_APP        │  │
│  │                                                      │  │
│  └──────────────────────────────────────────────────────┘  │
│                                                            │
└──────────────────────────────┬─────────────────────────────┘
                               │ HTTPS + mTLS
                               ▼
                ┌──────────────────────────────┐
                │      ICP Server (SaaS)       │
                │  - Agent SVID Issuance       │
                │  - Workload SVID Issuance    │
                └──────────────────────────────┘

How Kubernetes Attestation Works

The agent uses the Kubernetes Plugin to collect pod metadata as selectors:

Selector               Description        Example
k8s:ns                 Pod namespace      "default"
k8s:sa                 Service account    "my-app-sa"
k8s:pod-name           Pod name           "my-app-7984bc7b57-9xsk4"
k8s:pod-uid            Pod UID            "e1067eac-41e1-4cfa-a181-939f3b2cf6ba"
k8s:pod-label:<key>    Pod label          "k8s:pod-label:app": "my-app"
k8s:node-name          Node name          "k8s-node-01"

Metadata is collected from:

  1. Kubernetes Downward API (environment variables)
  2. Kubernetes API Server (agent queries for additional metadata)
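
If attestation ever misbehaves, the Downward API values are easy to inspect from inside a pod. A minimal illustrative Python snippet (the variable names match the workload deployment example later in this guide):

import os

# Print the Downward API env vars that feed Kubernetes attestation.
for var in ("POD_NAME", "POD_NAMESPACE", "POD_UID", "SERVICE_ACCOUNT", "POD_LABEL_APP"):
    print(f"{var}={os.environ.get(var, '<unset>')}")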

Installation

Method 1: Helm (Recommended)

Step 1: Add Helm Repository

# Add AuthSec Helm repo
helm repo add authsec https://charts.authsec.ai
helm repo update

Step 2: Create values.yaml

cat > icp-agent-values.yaml <<EOF
# ICP Agent Configuration
image:
  repository: your-docker-registry.example.com/icp-agent
  tag: latest
  pullPolicy: Always

# Agent settings
agent:
  tenantId: "your-tenant-id-here"
  clusterName: "my-k8s-cluster"
  icpServiceUrl: "https://your-icp-server.example.com/spiresvc"
  logLevel: info
  socketPath: /run/spire/sockets/agent.sock

# Service Account
serviceAccount:
  create: true
  name: icp-agent

# Security Context
securityContext:
  runAsUser: 0
  runAsGroup: 0
  runAsNonRoot: false
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
    add:
      - SYS_PTRACE  # Required for process attestation
  seccompProfile:
    type: RuntimeDefault

# Resources
resources:
  limits:
    cpu: "500m"
    memory: "512Mi"
  requests:
    cpu: "100m"
    memory: "128Mi"

# Health probes
healthProbe:
  enabled: true
  port: 8080
  livenessProbe:
    initialDelaySeconds: 30
    periodSeconds: 60
    timeoutSeconds: 10
    failureThreshold: 3
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 30
    timeoutSeconds: 5
    failureThreshold: 3

# Tolerations (run on all nodes)
tolerations:
  - operator: Exists

# Node selector (optional - restrict to specific nodes)
nodeSelector: {}
#   role: worker

# Affinity (optional)
affinity: {}
EOF

Step 3: Install Agent

# Install in default namespace
helm install icp-agent authsec/icp-agent \
  -f icp-agent-values.yaml \
  --namespace default \
  --create-namespace

# Wait for DaemonSet to be ready
kubectl rollout status daemonset/icp-agent -n default

Step 4: Verify Installation

# Check DaemonSet
kubectl get daemonset -n default

# Check pods (should be 1 per node)
kubectl get pods -n default -l app=icp-agent -o wide

# Check logs
kubectl logs -n default -l app=icp-agent --tail=50

# Check health (kubectl exec needs a pod name, not a label selector)
POD=$(kubectl get pod -n default -l app=icp-agent -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n default "$POD" -- curl -s http://localhost:8080/healthz

Expected output:

{"status":"healthy"}

Method 2: kubectl (Manual Deployment)

If you prefer not to use Helm, here are raw Kubernetes manifests:

Step 1: Create Namespace

# The examples below use the default namespace, which already exists.
# If you deploy to a dedicated namespace instead, create it first and
# adjust the manifests accordingly:
kubectl create namespace <your-namespace>

Step 2: Deploy ConfigMap

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: icp-agent-config
  namespace: default
  labels:
    app: icp-agent
data:
  config.yaml: |
    agent:
      tenant_id: "your-tenant-id-here"
      node_id: "\${NODE_NAME}"
      data_dir: "/var/lib/icp-agent"
      socket_path: "/run/spire/sockets/agent.sock"
      renewal_threshold: "6h"

    icp_service:
      address: "https://dev.api.authsec.dev/spiresvc"
      trust_bundle_path: "/etc/icp-agent/ca-bundle.pem"
      timeout: 30
      max_retries: 3
      retry_backoff: 5

    attestation:
      type: "kubernetes"
      kubernetes:
        token_path: "/var/run/secrets/kubernetes.io/serviceaccount/token"
        cluster_name: "my-k8s-cluster"
      unix:
        method: "procfs"

    security:
      cache_encryption_key: ""
      cache_path: "/var/lib/icp-agent/cache/svid.cache"

    logging:
      level: "info"
      format: "json"
      file_path: ""

    health:
      enabled: true
      port: 8080
      bind_address: "0.0.0.0"
EOF

Step 3: Deploy RBAC

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: icp-agent
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: icp-agent
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: icp-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: icp-agent
subjects:
  - kind: ServiceAccount
    name: icp-agent
    namespace: default
EOF

Step 4: Deploy DaemonSet

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: icp-agent
  namespace: default
  labels:
    app: icp-agent
spec:
  selector:
    matchLabels:
      app: icp-agent
  template:
    metadata:
      labels:
        app: icp-agent
    spec:
      serviceAccountName: icp-agent
      hostPID: true
      hostNetwork: false

      initContainers:
        - name: init-socket-dir
          image: busybox:1.36
          command:
            - sh
            - -c
            - |
              mkdir -p /run/spire/sockets
              chmod 0777 /run/spire/sockets
          volumeMounts:
            - name: spire-agent-socket-dir
              mountPath: /run/spire/sockets

      containers:
        - name: icp-agent
          image: your-docker-registry.example.com/icp-agent:latest
          imagePullPolicy: Always

          command:
            - "icp-agent"
            - "-c"
            - "/etc/icp-agent/config.yaml"

          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name

          securityContext:
            runAsUser: 0
            runAsGroup: 0
            runAsNonRoot: false
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
              add:
                - SYS_PTRACE
            seccompProfile:
              type: RuntimeDefault

          volumeMounts:
            - name: config
              mountPath: /etc/icp-agent
              readOnly: true
            - name: spire-agent-socket-dir
              mountPath: /run/spire/sockets
              readOnly: false
            - name: agent-data
              mountPath: /var/lib/icp-agent
              readOnly: false
            - name: agent-tmp
              mountPath: /tmp
              readOnly: false
            - name: proc
              mountPath: /proc
              readOnly: true
            - name: sa-token
              mountPath: /var/run/secrets/kubernetes.io/serviceaccount
              readOnly: true

          resources:
            limits:
              cpu: "500m"
              memory: "512Mi"
            requests:
              cpu: "100m"
              memory: "128Mi"

          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 60

          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 30

      volumes:
        - name: config
          configMap:
            name: icp-agent-config
        - name: spire-agent-socket-dir
          hostPath:
            path: /run/spire/sockets
            type: DirectoryOrCreate
        - name: agent-data
          emptyDir:
            sizeLimit: 1Gi
        - name: agent-tmp
          emptyDir:
            sizeLimit: 512Mi
        - name: proc
          hostPath:
            path: /proc
            type: Directory
        - name: sa-token
          projected:
            sources:
              - serviceAccountToken:
                  path: token
                  expirationSeconds: 3600

      tolerations:
        - operator: Exists

      dnsPolicy: ClusterFirst
EOF

Workload Registration

Understanding Node Matching

CRITICAL: The parent_id in a workload registration MUST match the SPIFFE ID of the agent running on the node that hosts the workload pod.

Agent SPIFFE ID format:

spiffe://<tenant-id>/agent/<node-name>
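
For example, a small Python helper for building the parent_id (illustrative, following the format above):

def agent_spiffe_id(tenant_id: str, node_name: str) -> str:
    # Agent SPIFFE ID format from this guide: spiffe://<tenant-id>/agent/<node-name>
    return f"spiffe://{tenant_id}/agent/{node_name}"

# agent_spiffe_id("your-tenant-id-here", "k8s-node-01")
# -> "spiffe://your-tenant-id-here/agent/k8s-node-01"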

Finding Your Workload's Node

# Deploy your workload first
kubectl apply -f your-workload.yaml

# Find which node it's running on
kubectl get pods -n default -o wide

# Example output:
# NAME                      READY   STATUS    NODE
# my-app-7984bc7b57-9xsk4   1/1     Running   k8s-node-01

Workload Deployment Example

File: my-app-deployment.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: default
  labels:
    app: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: v1
        environment: production
    spec:
      serviceAccountName: my-app
      nodeSelector:
        kubernetes.io/hostname: node-name  # Must match the registered Parent Agent ID
      containers:
        - name: my-app
          image: my-registry.example.com/my-app:latest
          ports:
            - containerPort: 8080

          env:
            # CRITICAL: SPIFFE socket path
            - name: SPIFFE_ENDPOINT_SOCKET
              value: "unix:///run/spire/sockets/agent.sock"

            # CRITICAL: Kubernetes Downward API for attestation
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_UID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.uid
            - name: SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  fieldPath: spec.serviceAccountName
            - name: POD_LABEL_APP
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app']

            # Application config
            - name: LOG_LEVEL
              value: "info"

          volumeMounts:
            # CRITICAL: Mount agent socket
            - name: spire-agent-socket
              mountPath: /run/spire/sockets
              readOnly: true

          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"

          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 30

          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10

      volumes:
        # CRITICAL: hostPath volume for agent socket
        - name: spire-agent-socket
          hostPath:
            path: /run/spire/sockets
            type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: default
spec:
  selector:
    app: my-app
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP

Deploy:

kubectl apply -f my-app-deployment.yaml

Registering Workload

Strategy 1: By App Label

This is the most common pattern for Kubernetes workloads:

# Get node name where pod is running
export NODE_NAME=$(kubectl get pod -n default -l app=my-app -o jsonpath='{.items[0].spec.nodeName}')

# Set variables
export ICP_SERVER_URL="https://dev.api.authsec.dev/spiresvc"
export TENANT_ID="your-tenant-id-here"

# Register workload (parent_id is built from the tenant and node, per the format above)
curl -X POST "${ICP_SERVER_URL}/api/v1/workloads" \
  -H "Content-Type: application/json" \
  -d @- <<EOF
{
  "spiffe_id": "spiffe://your-spiffe-id",
  "parent_id": "spiffe://${TENANT_ID}/agent/${NODE_NAME}",
  "selectors": {
    "k8s:ns": "default",
    "k8s:pod-label:app": "my-app"
  },
  "ttl": 3600
}
EOF

This registers all pods with label app=my-app in namespace default that run on the node in ${NODE_NAME} (k8s-node-01 in the example above).

Strategy 2: By Service Account

For fine-grained access control:

curl -X POST "${ICP_SERVER_URL}/api/v1/workloads" \
  -H "Content-Type: application/json" \
  -d '{
    "spiffe_id": "spiffe://your-spiffe-id",
    "parent_id": "spiffe://your-parent-id",
    "selectors": {
      "k8s:ns": "default",
      "k8s:sa": "my-app",
      "k8s:pod-label:app": "my-app"
    },
    "ttl": 3600
  }'

Strategy 3: By Multiple Labels

For environment-specific workloads:

curl -X POST "${ICP_SERVER_URL}/api/v1/workloads" \
  -H "Content-Type: application/json" \
  -d '{
    "spiffe_id": "spiffe://your-spiffe-id",
    "parent_id": "spiffe://your-parent-id",
    "selectors": {
      "k8s:ns": "default",
      "k8s:pod-label:app": "my-app",
      "k8s:pod-label:environment": "production"
    },
    "ttl": 3600
  }'
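
The same registrations can also be scripted from Python. A minimal sketch using httpx against the endpoint shown above (the payload mirrors the curl examples; the helper name and error handling are ours):

import os
import httpx

ICP_SERVER_URL = os.environ["ICP_SERVER_URL"]  # e.g. https://dev.api.authsec.dev/spiresvc

def register_workload(spiffe_id: str, parent_id: str, selectors: dict, ttl: int = 3600) -> dict:
    """POST a workload entry to the ICP Server (same payload as the curl examples)."""
    payload = {
        "spiffe_id": spiffe_id,
        "parent_id": parent_id,
        "selectors": selectors,
        "ttl": ttl,
    }
    resp = httpx.post(f"{ICP_SERVER_URL}/api/v1/workloads", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()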

Multi-Node Deployments

If your pods run across multiple nodes, register for each node:

# Get all nodes
kubectl get nodes -o name

# Register for each node (note: parent_id must be built per node,
# so the JSON body uses a heredoc that expands $NODE)
for NODE in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  echo "Registering for node: $NODE"
  curl -X POST "${ICP_SERVER_URL}/api/v1/workloads" \
    -H "Content-Type: application/json" \
    -d @- <<EOF
{
  "spiffe_id": "spiffe://your-spiffe-id",
  "parent_id": "spiffe://${TENANT_ID}/agent/${NODE}",
  "selectors": {
    "k8s:ns": "default",
    "k8s:pod-label:app": "my-app"
  }
}
EOF
done
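
The same loop in Python, reusing the register_workload sketch above (node names are fetched via kubectl; parent_id is built per node as described under Understanding Node Matching):

import os
import subprocess

TENANT_ID = os.environ["TENANT_ID"]

# List node names the same way the shell loop does.
nodes = subprocess.run(
    ["kubectl", "get", "nodes", "-o", "jsonpath={.items[*].metadata.name}"],
    capture_output=True, text=True, check=True,
).stdout.split()

for node in nodes:
    print(f"Registering for node: {node}")
    register_workload(
        spiffe_id="spiffe://your-spiffe-id",
        parent_id=f"spiffe://{TENANT_ID}/agent/{node}",
        selectors={"k8s:ns": "default", "k8s:pod-label:app": "my-app"},
    )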

SDK Integration

Installation

Add AuthSec SDK to your application's dependencies:

Python:

# Dockerfile
FROM python:3.11-slim

# Install SDK
RUN pip install git+https://github.com/authsec-ai/sdk-authsec.git

# Copy application
COPY . /app
WORKDIR /app

CMD ["python", "app.py"]

requirements.txt:

git+https://github.com/authsec-ai/sdk-authsec.git
httpx
fastapi
uvicorn

Quick Start Example

File: app.py

from authsec_sdk import QuickStartSVID
from fastapi import FastAPI
from fastapi.responses import JSONResponse
import httpx
import uvicorn

app = FastAPI()

# Global SVID instance
svid = None

@app.on_event("startup")
async def startup():
    global svid
    # Initialize SDK (reads from SPIFFE_ENDPOINT_SOCKET env var)
    svid = await QuickStartSVID.initialize()
    print(f"✅ Pod authenticated as: {svid.spiffe_id}")
    print(f"📜 Certificate expires: {svid.expires_at}")

@app.get("/healthz")
async def health():
    return {"status": "healthy"}

@app.get("/ready")
async def ready():
    # Check if we have a valid SVID
    if svid and svid.spiffe_id:
        return {"ready": True}
    return JSONResponse(status_code=503, content={"ready": False})

@app.post("/call-backend")
async def call_backend():
    """Make mTLS request to another service"""
    # Create fresh SSL context (handles cert renewal)
    ssl_context = svid.create_ssl_context_for_client()

    async with httpx.AsyncClient(verify=ssl_context) as client:
        response = await client.post(
            "https://backend-service.default.svc.cluster.local:8443/api",
            json={"data": "value"}
        )
        return response.json()

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8080)

Building Docker Image

FROM python:3.11-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application
COPY app.py .

# Run as non-root user
RUN useradd -m -u 1000 appuser
USER appuser

# Expose port
EXPOSE 8080

# Run application
CMD ["python", "app.py"]

Build and push:

docker build -t my-registry.example.com/my-app:latest .
docker push my-registry.example.com/my-app:latest

Service-to-Service mTLS

Client Example

from authsec_sdk import QuickStartSVID
import httpx
import asyncio

async def call_payment_service():
    # Get SVID
    svid = await QuickStartSVID.initialize()

    # Make mTLS request
    ssl_context = svid.create_ssl_context_for_client()

    async with httpx.AsyncClient(verify=ssl_context) as client:
        response = await client.post(
            "https://payment-service.default.svc.cluster.local:8443/pay",
            json={
                "amount": 100.00,
                "currency": "USD",
                "order_id": "ORDER-123"
            }
        )
        return response.json()

if __name__ == "__main__":
    result = asyncio.run(call_payment_service())
    print(f"Payment result: {result}")

Server Example (FastAPI)

import asyncio
import ssl

from fastapi import FastAPI, Request
from authsec_sdk import QuickStartSVID
import uvicorn

app = FastAPI()
svid = None

@app.on_event("startup")
async def startup():
    print(f"Payment service authenticated as: {svid.spiffe_id}")

@app.post("/pay")
async def process_payment(request: Request, payment: dict):
    # Get client's SPIFFE ID from TLS certificate
    # (when running with mTLS, this is available in request.client)

    # Verify client is authorized
    # allowed_clients = [
    #     "spiffe://your-trust-domain.example.com/workload/order-service"
    # ]

    # Process payment
    return {
        "status": "success",
        "transaction_id": "TXN-123"
    }

if __name__ == "__main__":
    # Initialize the SVID before starting so the cert files exist for uvicorn
    svid = asyncio.run(QuickStartSVID.initialize())

    # Note: In production, use a process manager that handles cert rotation
    uvicorn.run(
        app,
        host="0.0.0.0",
        port=8443,
        ssl_certfile=svid.cert_file_path,
        ssl_keyfile=svid.key_file_path,
        ssl_ca_certs=svid.ca_file_path,
        ssl_cert_reqs=ssl.CERT_REQUIRED  # require client certificates (mTLS)
    )

Production Deployment

High Availability

For production, ensure:

  1. Multiple Replicas

     spec:
       replicas: 3
       strategy:
         type: RollingUpdate
         rollingUpdate:
           maxUnavailable: 1
           maxSurge: 1

  2. Pod Disruption Budget

     apiVersion: policy/v1
     kind: PodDisruptionBudget
     metadata:
       name: my-app-pdb
     spec:
       minAvailable: 2
       selector:
         matchLabels:
           app: my-app

  3. Resource Limits

     resources:
       requests:
         memory: "256Mi"
         cpu: "100m"
       limits:
         memory: "512Mi"
         cpu: "500m"

Monitoring

Prometheus ServiceMonitor

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: icp-agent
  namespace: default
spec:
  selector:
    matchLabels:
      app: icp-agent
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics

Grafana Dashboard

Import dashboard ID: 15000 (ICP Agent Metrics)

Key metrics:

  • icp_agent_svid_issued_total - Total SVIDs issued
  • icp_agent_cache_hits_total - Cache hit rate
  • icp_agent_health_status - Agent health
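
To spot-check these counters without Grafana, you can port-forward an agent pod (kubectl port-forward <agent-pod> 8080:8080) and filter the metrics output. A hedged Python sketch that assumes metrics are served on the health port; adjust if your deployment exposes a dedicated metrics port:

import httpx

# Assumes `kubectl port-forward <agent-pod> 8080:8080` is running.
metrics = httpx.get("http://localhost:8080/metrics").text
for line in metrics.splitlines():
    if line.startswith("icp_agent_"):
        print(line)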

Log Aggregation

Fluentd DaemonSet:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/icp-agent-*.log
      pos_file /var/log/fluentd-icp-agent.pos
      tag icp.agent
      format json
    </source>

    <match icp.agent>
      @type elasticsearch
      host elasticsearch.logging.svc.cluster.local
      port 9200
      index_name icp-agent
    </match>

Security

Network Policies

Restrict agent communication:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: icp-agent-netpol
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: icp-agent
  policyTypes:
    - Egress
  egress:
    # Allow DNS
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: UDP
          port: 53
    # Allow HTTPS to the external ICP Server (tighten the CIDR to its address range)
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443

Pod Security Standards

apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted

Note: the restricted profile rejects hostPath volumes and hostPID, which the agent DaemonSet and the workload socket mount rely on. Apply restricted only to namespaces that do not mount the agent socket, and use a more permissive profile (e.g. baseline) where they do.

Troubleshooting

Issue 1: Agent Pods Not Starting

# Check pod status
kubectl get pods -n default -l app=icp-agent

# Describe pod
kubectl describe pod -n default -l app=icp-agent

# Check logs
kubectl logs -n default -l app=icp-agent --tail=100

Common fixes:

  • Check image pull secrets: kubectl get secret -n default
  • Verify RBAC: kubectl auth can-i get pods --as=system:serviceaccount:default:icp-agent
  • Check node resources: kubectl describe node

Issue 2: Workload Can't Connect to Agent

# Check socket exists in pod
kubectl exec -it my-app-pod -- ls -l /run/spire/sockets/

# Check socket permissions
kubectl exec -it my-app-pod -- stat /run/spire/sockets/agent.sock

# Test from pod
kubectl exec -it my-app-pod -- python3 -c "
from authsec_sdk import QuickStartSVID
import asyncio
svid = asyncio.run(QuickStartSVID.initialize())
print(svid.spiffe_id)
"

Issue 3: Wrong SPIFFE ID Issued

# Check collected selectors (agent logs)
kubectl logs -n default -l app=icp-agent | grep "Collected selectors"

# Check registered workload entries
curl "${ICP_SERVER_URL}/api/v1/workloads?parent_id=spiffe://${TENANT_ID}/agent/${NODE_NAME}"

# Check pod labels
kubectl get pod my-app-pod -o jsonpath='{.metadata.labels}'

# Check Downward API env vars
kubectl exec my-app-pod -- env | grep POD_

Best Practices

1. Use Labels for Workload Identification

metadata:
  labels:
    app: my-app
    version: v1
    environment: production
    team: backend

Register with multiple labels for fine-grained control.
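
For instance, the labels above translate into a selector set like this (illustrative; pick the subset you need):

selectors = {
    "k8s:ns": "default",
    "k8s:pod-label:app": "my-app",
    "k8s:pod-label:version": "v1",
    "k8s:pod-label:environment": "production",
    "k8s:pod-label:team": "backend",
}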

2. Use Service Accounts

Create dedicated service accounts for each workload:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: payment-service
  namespace: default

3. Implement Health Checks

from fastapi.responses import JSONResponse

@app.get("/ready")
async def ready():
    # Check SVID is valid
    if svid and svid.spiffe_id and not svid.is_expired():
        return {"ready": True}
    return JSONResponse(status_code=503, content={"ready": False})

4. Handle Certificate Rotation

Always create a fresh SSL context, and pass it when constructing the client (httpx takes verify at client creation, not per request):

# ✅ CORRECT - fresh SSL context (and client) for each batch of calls
ssl_context = svid.create_ssl_context_for_client()
async with httpx.AsyncClient(verify=ssl_context) as client:
    response = await client.post(url)

# ❌ WRONG - reusing one context (and client) forever
ssl_context = svid.create_ssl_context_for_client()
async with httpx.AsyncClient(verify=ssl_context) as client:
    while True:
        response = await client.post(url)  # Won't pick up rotated certs!
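
A small helper makes this pattern hard to get wrong (illustrative; mtls_post is our name, not an SDK function):

import httpx

async def mtls_post(svid, url: str, **kwargs) -> dict:
    # Build a fresh SSL context on every call so rotated certs are picked up.
    ssl_context = svid.create_ssl_context_for_client()
    async with httpx.AsyncClient(verify=ssl_context) as client:
        resp = await client.post(url, **kwargs)
        resp.raise_for_status()
        return resp.json()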

5. Use Namespace Isolation

Deploy workloads in separate namespaces:

kubectl create namespace frontend
kubectl create namespace backend
kubectl create namespace database

Next Steps

Questions? Contact support@authsec.dev