How ICP Platform Works (Identity Control Plane)
Part 3 of the Zero-Trust Workload Identity Series
← Previous: Understanding SPIFFE and SPIRE | Next: Comparing Solutions →
The ICP Platform Approach
ICP (Identity Control Plane) Platform is a multi-tenant SaaS implementation of SPIFFE/SPIRE that eliminates operational burden while adding enterprise features.
Core Philosophy
"You shouldn't have to run your own identity infrastructure"
Just as you don't run your own email servers (Gmail), DNS (Cloudflare), or auth provider (Auth0), you shouldn't have to run your own workload identity infrastructure.
SaaS vs Self-Hosted SPIRE
Traditional SPIRE (Self-Hosted)
┌─────────────────────────────────────┐
│ YOUR INFRASTRUCTURE │
│ │
│ SPIRE Server (YOU manage) │
│ - High availability │
│ - Database │
│ - Backups, patches, monitoring │
│ │
│ SPIRE Agents (YOU manage) │
│ - Deploy to every node │
│ - Configuration, upgrades │
└─────────────────────────────────────┘
Time to production: 6-12 weeks
Operational burden: HIGH
ICP Platform (Managed Multi-Tenant SaaS)
┌─────────────────────────────────────┐
│ AUTHSEC CLOUD (AuthSec manages) │
│ │
│ ICP Server │
│ - Multi-tenant │
│ - Auto-scaling, patches, DR │
└────────────┬────────────────────────┘
│ HTTPS + mTLS
▼
┌─────────────────────────────────────┐
│ CUSTOMER INFRASTRUCTURE │
│ │
│ ICP Agent (deploy once) │
│ - DaemonSet / Docker / systemd │
└─────────────────────────────────────┘
Time to production: ~60 minutes
Operational burden: MINIMAL
Architecture Components
1. ICP Server (AuthSec Managed)
Location: AuthSec cloud (multi-region)
Key components:
Certificate Authority
Root CA (HSM-backed)
└─ Intermediate CA (one per tenant)
└─ Agent SVIDs
└─ Workload SVIDs
- Root CA in Hardware Security Module
- Isolated CA per tenant
- 1-hour default TTL
Workload Registry
workload_entries:
  - spiffe_id: spiffe://domain/workload/frontend
    parent_id: spiffe://tenant-id/agent/node-01
    selectors: {"k8s:ns": "prod", "k8s:pod-label:app": "frontend"}
- Multi-tenant isolation
- REST API for management
- Audit logs
Attestation Services
/api/v1/agent/attest # Agent node attestation
/api/v1/workload/attest # Workload SVID issuance
2. ICP Agent (Customer Infrastructure)
Deployment:
- Kubernetes: DaemonSet (1 pod per node)
- Docker: Container with shared socket
- VMs: systemd service
Key modules:
Node Attestation
# Kubernetes example: exchange the node's service-account token for an agent SVID
import requests

with open("/var/run/secrets/kubernetes.io/serviceaccount/token") as f:
    token = f.read()

response = requests.post("https://icp-server/api/v1/agent/attest",
                         json={"token": token})
agent_svid = response.json()["svid"]
Workload Attestation Plugins
Kubernetes selectors:
{
  "k8s:ns": "production",
  "k8s:sa": "frontend-sa",
  "k8s:pod-label:app": "frontend"
}
Docker selectors:
{
  "docker:label:app": "frontend",
  "docker:container-name": "frontend",
  "docker:image-name": "frontend:v1.2.3"
}
Unix selectors:
{
  "unix:uid": "1000",
  "unix:path": "/opt/app/bin/frontend",
  "unix:sha256": "abc123..."
}
Workload API Server
gRPC over Unix socket: /run/spire/sockets/agent.sock
Flow:
- Workload connects to socket
- Agent identifies caller by PID
- Agent collects selectors
- Agent checks cache ({pid}:{hash(selectors)}); issues or retrieves SVID
- Agent streams SVID to workload
- Agent auto-renews at 90% TTL
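The cache-first part of that flow can be sketched in a few lines. This is a simplification: `issue_svid` stands in for the attestation round trip to the ICP Server, and the composite key mirrors the `{pid}:{hash(selectors)}` scheme described above:

```python
def get_svid(pid, selectors, cache, issue_svid):
    """Return a cached SVID for this workload, or issue a new one.
    The composite key ties the entry to both the PID and the exact
    selector set, so workloads sharing a PID don't collide."""
    key = f"{pid}:{hash(frozenset(selectors.items()))}"
    if key not in cache:
        cache[key] = issue_svid(selectors)  # round trip to ICP Server
    return cache[key]

cache = {}
issued = []  # record how many times we hit the "server"
fake_issue = lambda sel: issued.append(sel) or f"svid-for-{sel['k8s:ns']}"

s = {"k8s:ns": "production"}
a = get_svid(42, s, cache, fake_issue)
b = get_svid(42, s, cache, fake_issue)  # second call is served from cache
print(a == b, len(issued))  # True 1
```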
SVID Cache Manager
# Composite cache key prevents collisions
cache_key = f"{pid}:{hash(selectors)}"
workload_svids[cache_key] = svid
3. AuthSec SDK (Application Integration)
Installation:
pip install git+https://github.com/authsec-ai/sdk-authsec.git
QuickStart usage:
from authsec_sdk import QuickStartSVID
# Initialize (connects to agent socket)
svid = await QuickStartSVID.initialize(socket_path="your/agent/path.sock")
# Client-side mTLS
ssl_context = svid.create_ssl_context_for_client()
response = await client.post("https://api/endpoint", verify=ssl_context)
# Server-side mTLS (FastAPI example)
ssl_keyfile, ssl_certfile, ssl_ca_certs = svid.get_ssl_paths()
uvicorn.run(app, ssl_keyfile=ssl_keyfile, ssl_certfile=ssl_certfile,
ssl_ca_certs=ssl_ca_certs)
Features:
- Auto-connects to agent socket
- Handles certificate renewal
- Creates fresh SSL contexts (client-side)
- Works with requests, httpx, aiohttp, FastAPI
End-to-End Flow Example
Scenario: Frontend service calls Payment service
Step 1: Frontend Requests SVID
Frontend Pod starts
↓
SDK connects to /run/spire/sockets/agent.sock
↓
Agent identifies: PID=42, collects selectors
{
  "k8s:ns": "production",
  "k8s:pod-label:app": "frontend"
}
↓
Agent → ICP Server: Attest workload with selectors
↓
ICP Server matches workload entry:
{
  "spiffe_id": "spiffe://prod.example.com/workload/frontend",
  "selectors": {"k8s:ns": "production", "k8s:pod-label:app": "frontend"}
}
↓
ICP Server issues SVID (1-hour TTL)
↓
Agent caches SVID with key "42:{hash(selectors)}"
↓
Agent streams to Frontend
↓
SDK writes to /tmp/spiffe-certs/
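The final step, where the SDK materializes the streamed SVID on disk, can be sketched as below. The file names follow the `/tmp/spiffe-certs/` layout used later in this post; the PEM contents and the helper name are placeholders, not the SDK's actual internals:

```python
import os
from pathlib import Path

def write_svid_files(cert_pem: str, key_pem: str, bundle_pem: str,
                     cert_dir: str = "/tmp/spiffe-certs") -> dict:
    """Write the SVID cert, private key, and trust bundle as PEM files
    for TLS stacks that expect file paths. The key is owner-only."""
    d = Path(cert_dir)
    d.mkdir(parents=True, exist_ok=True)
    paths = {"cert": d / "cert.pem", "key": d / "key.pem",
             "bundle": d / "bundle.pem"}
    paths["cert"].write_text(cert_pem)
    paths["key"].write_text(key_pem)
    paths["bundle"].write_text(bundle_pem)
    os.chmod(paths["key"], 0o600)  # private key: owner read/write only
    return {k: str(v) for k, v in paths.items()}

paths = write_svid_files("CERT", "KEY", "BUNDLE",
                         cert_dir="/tmp/spiffe-certs-demo")
print(paths["cert"])  # /tmp/spiffe-certs-demo/cert.pem
```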
Step 2: Payment Service Gets SVID
(Same flow, different selectors)
Payment selectors:
{
  "k8s:ns": "production",
  "k8s:pod-label:app": "payment"
}
Issued SPIFFE ID:
spiffe://prod.example.com/workload/payment
Step 3: Frontend → Payment mTLS Connection
Frontend makes request:
↓
ssl_context = svid.create_ssl_context_for_client()
- Loads /tmp/spiffe-certs/cert.pem (Frontend SVID)
- Loads /tmp/spiffe-certs/key.pem (Frontend private key)
- Loads /tmp/spiffe-certs/bundle.pem (trust bundle)
↓
TLS Handshake:
1. Frontend presents cert with spiffe://prod.example.com/workload/frontend
2. Payment presents cert with spiffe://prod.example.com/workload/payment
3. Both verify certificates against trust bundle
4. Both verify SPIFFE ID matches expected value
↓
Mutual authentication successful
↓
Encrypted connection established
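Step 4 of the handshake is the authorization check plain TLS doesn't give you: chain validation proves the peer is in the trust domain, while the SPIFFE ID comparison proves it is the specific workload you expect. A minimal sketch, assuming the peer's SPIFFE ID has already been extracted from the certificate's URI SAN:

```python
def authorize_peer(peer_spiffe_id: str, allowed: set) -> bool:
    """Run after certificate-chain validation succeeds: accept the
    connection only if the peer's SPIFFE ID is on the allow-list."""
    return peer_spiffe_id in allowed

# Frontend only accepts connections to/from the payment workload.
allowed = {"spiffe://prod.example.com/workload/payment"}
print(authorize_peer("spiffe://prod.example.com/workload/payment", allowed))   # True
print(authorize_peer("spiffe://prod.example.com/workload/attacker", allowed))  # False
```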
Step 4: Automatic Renewal (T+54 min)
Agent: SVID expiring at 90% TTL
↓
Agent → ICP Server: Renew SVID
↓
ICP Server: New SVID (expires T+114 min)
↓
Agent updates cache
↓
Agent streams update to Frontend
↓
SDK writes new cert to /tmp/spiffe-certs/
↓
Next request uses fresh certificate
(No application restart needed!)
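The renewal trigger above follows directly from the SVID lifetime. A sketch of the 90%-of-TTL rule, which for a 60-minute certificate fires at T+54 min, matching the timeline in this step:

```python
def renewal_time(issued_at: float, expires_at: float,
                 threshold: float = 0.9) -> float:
    """Return the timestamp at which the agent should renew:
    when `threshold` of the certificate's lifetime has elapsed."""
    return issued_at + (expires_at - issued_at) * threshold

# 1-hour TTL issued at T=0: renew at 54 minutes, well before expiry.
t = renewal_time(0, 3600)
print(t / 60)  # 54.0
```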
Key Innovations
1. Composite Cache Keys
Problem: in some Kubernetes environments, every pod connects to the agent socket with PID -1, so a PID-only cache key collides across workloads.
Solution:
cache_key = f"{pid}:{hash(selectors)}"
# Example: "-1:8472934" vs "-1:2847362"
Each workload gets unique cache entry even with same PID.
2. Remote Headless Architecture
Traditional SPIRE: Server runs in your infrastructure.
ICP Platform: Server runs in AuthSec cloud.
Benefits:
- No server management
- Multi-tenant isolation
- Automatic scaling
3. Multi-Plugin Attestation
Kubernetes + Docker + Unix support in single agent:
# Agent detects the environment and applies every plugin that matches
if k8s_metadata:
    selectors.update(collect_k8s_selectors())
if docker_metadata:
    selectors.update(collect_docker_selectors())
if unix_process:
    selectors.update(collect_unix_selectors())
Single agent works across all environments.
4. REST API for Management
Traditional SPIRE:
# Shell access to the server required
spire-server entry create \
  -spiffeID spiffe://prod/workload/frontend \
  -parentID spiffe://prod/agent/node-01 \
  -selector k8s:ns:production
ICP Platform (app.authsec.dev):
curl -X POST https://icp-server/api/v1/workloads \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "spiffe_id": "spiffe://prod/workload/frontend",
    "parent_id": "spiffe://tenant-id/agent/k8s-node-01",
    "selectors": {
      "k8s:ns": "production",
      "k8s:pod-label:app": "frontend"
    }
  }'
Features:
- GitOps-friendly
- Audit trail
- RBAC
- API-driven automation
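For automation, the same call works from any HTTP client. A Python equivalent of the curl example, using only the standard library; the endpoint path and payload shape are taken from the example above, so adjust the host and token to your tenant:

```python
import json
import urllib.request

def register_workload(base_url: str, token: str, entry: dict) -> bytes:
    """POST a workload entry to the ICP management API."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/workloads",
        data=json.dumps(entry).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

entry = {
    "spiffe_id": "spiffe://prod/workload/frontend",
    "parent_id": "spiffe://tenant-id/agent/k8s-node-01",
    "selectors": {"k8s:ns": "production", "k8s:pod-label:app": "frontend"},
}
# register_workload("https://icp-server", token, entry)
```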
Security Architecture
Defense in Depth
Layer 1: Node Attestation
- Agent proves node identity
- K8s token verified with API server
- AWS instance identity verified with AWS
Layer 2: Workload Attestation
- Selectors collected from runtime
- Matched against registered entries
- Can't fake kernel-level metadata
Layer 3: Certificate Verification
- mTLS with X.509 certificates
- Trust bundle validation
- SPIFFE ID verification
Layer 4: Short-Lived Credentials
- 1-hour TTL
- Automatic rotation
- Minimal blast radius if compromised
What's Next?
Now that we understand how ICP Platform works, the next post compares it to alternative solutions (SPIRE OSS, Istio, Vault, Cloud IAM).
- Part 4: Comparing Solutions
- Part 5: Get Started in 5 Minutes
Questions? support@authsec.ai | Get Started: Integration Guides