APAI.run v0.1
APAI v0.1 documentation

Docs

How APAI fits with the rest of your AI agent stack. Integrations, security model, deployment patterns, observability, and curated references to the broader ecosystem. For the formal specifications, see /spec.

Getting Started

Install the apai CLI, run your first install, read your first Capability Passport, and generate your first install receipt.


Integrations

Connect APAI to LangChain, LangGraph, CrewAI, Continue.dev, Claude Code, Codex, Cursor, Gemini CLI. Real code examples.


Security model

MCP security primer. Zero Trust for AI agents. MCP Gateway as control plane. Threat model. What APAI's v0.1 scanner actually catches.


Deployment patterns

Local-tool, cloud-sandbox, remote-connector install modes. Self-hosted, air-gapped, VPC, Kubernetes. When to use each.


Observability

Export traces from the MCP Gateway to Langfuse, LangSmith, Arize Phoenix, Helicone. Track tool calls, latency, cost, agent decisions.


Ecosystem references

MCP spec, llms.txt standard, Agent Skills format, Microsoft APM, OpenAI Apps SDK, Claude connectors, Gemini extensions, GitHub skills. Curated link set.


Specs vs. docs: what's the difference?

Specs at /spec are versioned protocol documents: Manifest, Capability Passport, Install Receipt, Policy Pack, Install Card. They define the data shapes and behavioral contracts that publishers, agents, and the registry must agree on.
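To make "data shapes and behavioral contracts" concrete, here is a purely hypothetical sketch of the kind of check an install receipt could support: record an artifact's hash at install time, then recompute and compare it later. Every field name below is invented for illustration and does not reflect the actual v0.1 schemas; the normative definitions live at /spec.

```python
# Hypothetical sketch only: the field names below are invented for
# illustration and do NOT reflect the normative APAI v0.1 schemas at /spec.
import hashlib

# An imagined install receipt: a registry-agreed record of what was
# installed (artifact identity, content hash, passport reference).
artifact_bytes = b"example-artifact-bytes"
receipt = {
    "apai_version": "0.1",             # spec version the receipt targets
    "artifact": "example-tool",        # hypothetical installed capability
    "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
    "passport": "example-passport-id", # hypothetical Capability Passport ref
}

def verify_receipt(receipt: dict, artifact_bytes: bytes) -> bool:
    """Illustrative behavioral contract: recompute the artifact hash
    and compare it to what the receipt recorded at install time."""
    return receipt["sha256"] == hashlib.sha256(artifact_bytes).hexdigest()

print(verify_receipt(receipt, artifact_bytes))      # True
print(verify_receipt(receipt, b"tampered-bytes"))   # False
```

The point of a shared, versioned data shape is exactly this: publisher, agent, and registry can all run the same check and agree on the result.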

Docs at /docs explain how to use APAI in practice: integration patterns, deployment choices, security tradeoffs, observability wiring, and how APAI sits relative to the broader AI capability ecosystem. Specs are normative; docs are practical.

v0.1 doc status

These docs ship at v0.1. They describe the APAI model and integrations against publicly documented surfaces. Where APAI is scaffolded rather than fully shipped, the doc says so directly. See the honest status page for the full shipped vs. stubbed vs. not-built breakdown.