APAI.run v0.1

Everything you need to install and govern LLM tools for agents.

Seven features that combine verified installation, automatic agent discovery, and governed execution into one platform.

1

Secure Verified Installation

Install from a curated directory of verified apps, frameworks, and tools. Each package has a Capability Passport that declares what it can read, write, access, spend, expose, and what approvals it needs.

Browse Registry->
  • Curated, verified directory
  • Capability Passport per package
  • Permission review before install
  • Install receipt with rollback command
  • Hidden Unicode + suspicious-pattern scanner (v0.1)
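A Capability Passport could look something like the sketch below. The field names, paths, and the rollback command are illustrative assumptions, not the published schema:

```json
{
  "name": "example-tool",
  "version": "1.2.0",
  "capabilities": {
    "read": ["./workspace/**"],
    "write": ["./workspace/output/**"],
    "network": ["api.example.com"],
    "spend": { "max_usd_per_day": 0 },
    "expose": []
  },
  "approvals_required": ["network-access"],
  "rollback": "apai uninstall example-tool --restore"
}
```

Reviewing this declaration before install is what makes the permission review and the later upgrade diff possible.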
2

Automatic llms.txt Generation

Every install automatically generates llms.txt manifests so your agents discover available capabilities without manual configuration. Follows the open llms.txt specification.

See site-wide llms.txt->
  • Toggle during or after install
  • Manifest follows the official llms.txt specification
  • View and edit the generated manifest
  • Accessible at the standard /llms.txt route and per-package /packages/{slug}/llms.txt routes
  • Same card serves all three install modes: local-tool, cloud-sandbox, remote-connector
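Per the llms.txt convention, a generated manifest is plain markdown: an H1 title, a blockquote summary, and sections of annotated links. The package name and paths below are illustrative:

```markdown
# example-tool

> A summarization tool installed via APAI.run. Reads files from the
> workspace and returns condensed text.

## Capabilities

- [Summarize](/packages/example-tool/docs/summarize.md): Condense a document
- [Extract](/packages/example-tool/docs/extract.md): Pull structured fields

## Optional

- [Changelog](/packages/example-tool/CHANGELOG.md): Version history
```

An agent that fetches /llms.txt gets this index without any hand-written configuration.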
3

MCP Integration

Native support for the Model Context Protocol. Register tools as MCP servers so any compatible agent can use them through a standardized protocol.

Integration guides->
  • Automatic MCP registration when enabled
  • Both local (stdio) and remote (HTTP/SSE) MCP servers
  • Compatible with Claude Code, Claude.ai connectors, Gemini CLI extensions, Codex, Cursor, GitHub agent skills
  • Future: hosted mcp.apai.run gateway endpoint
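Registration could surface as a standard `mcpServers` config block of the kind MCP clients such as Claude Code read. The `apai mcp serve` subcommand and the remote URL are assumptions for illustration:

```json
{
  "mcpServers": {
    "example-tool": {
      "command": "apai",
      "args": ["mcp", "serve", "example-tool"]
    },
    "example-remote": {
      "type": "http",
      "url": "https://gateway.example.com/mcp"
    }
  }
}
```

The first entry is a local stdio server; the second is a remote HTTP server, matching the two transport modes listed above.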
4

MCP Gateway (Optional but Powerful)

Add a centralized control plane with RBAC, credential injection, rate limiting, threat detection, and comprehensive audit logging. Zero Trust for AI agents.

MCP Security Whitepaper->
  • Role-based access control at the gateway
  • Credential injection: secrets are never exposed to agents
  • Comprehensive request/response audit logging
  • Token-based rate limiting and cost control
  • mTLS and certificate-pinning support
  • One-click Docker / Kubernetes deployment
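Token-based rate limiting is commonly implemented as a token bucket: each tool call spends tokens, and the bucket refills at a steady rate up to a cap. A minimal sketch, not the gateway's actual implementation:

```python
import time

class TokenBucket:
    """Token-based rate limiter: each call spends `cost` tokens;
    the bucket refills at `rate` tokens/second up to `capacity`."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(capacity=3, rate=0.5)
results = [bucket.allow() for _ in range(5)]
print(results)  # the first 3 calls pass; the rest are throttled
```

Cost control falls out of the same shape: set `cost` to the estimated token or dollar price of each call instead of a flat 1.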
5

Local + Enterprise Ready

Run fully locally with Ollama or LM Studio for personal development. Deploy gateways in Kubernetes with Zero Trust networking for production. Air-gapped support for sensitive environments.

Solutions overview->
  • Local-first by default
  • Optional gateway protection for production
  • Self-hosted and air-gapped deployment options
  • Kubernetes manifests for enterprise gateway deployments
  • Integration with corporate identity providers (Okta, Entra ID, SAML, OIDC)
  • VPC-isolated deployments
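A gateway manifest could be as small as a standard Kubernetes Deployment. The image name, port, and environment variable below are placeholders, not shipped values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apai-gateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: apai-gateway
  template:
    metadata:
      labels:
        app: apai-gateway
    spec:
      containers:
        - name: gateway
          image: apai/gateway:0.1        # illustrative image tag
          ports:
            - containerPort: 8443        # TLS/mTLS listener
          env:
            - name: OIDC_ISSUER_URL      # hypothetical IdP setting
              value: https://login.example.com
```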
6

Observability Ready

Export traces from the MCP Gateway to Langfuse, LangSmith, Phoenix, Helicone, and other observability platforms. Unified view of installation history + runtime behavior.

Observability guides->
  • Pre-configured integrations with Langfuse, LangSmith, Arize Phoenix, Helicone
  • Automatic trace export for tool calls
  • Sensitive data redaction options before sending traces
  • Unified install history + runtime metrics
  • Tool success/failure rates, latency, token usage
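The metrics in the last bullet reduce to simple aggregation over exported trace records. The record fields below are illustrative, not the gateway's actual export schema:

```python
from statistics import median

# Hypothetical exported trace records for tool calls.
traces = [
    {"tool": "search", "ok": True,  "latency_ms": 120, "tokens": 350},
    {"tool": "search", "ok": True,  "latency_ms": 180, "tokens": 410},
    {"tool": "search", "ok": False, "latency_ms": 950, "tokens": 0},
    {"tool": "write",  "ok": True,  "latency_ms": 60,  "tokens": 90},
]

def summarize(records, tool):
    calls = [r for r in records if r["tool"] == tool]
    return {
        "calls": len(calls),
        "success_rate": sum(r["ok"] for r in calls) / len(calls),
        "median_latency_ms": median(r["latency_ms"] for r in calls),
        "total_tokens": sum(r["tokens"] for r in calls),
    }

print(summarize(traces, "search"))
```

Platforms like Langfuse or LangSmith compute views like this for you once traces are exported; the sketch just shows what "success rate, latency, token usage" means per tool.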
7

Passport Diff at Install

When you upgrade a package, see exactly which permissions were added or removed compared to the version you already approved. No other AI capability platform ships this. Approve upgrades with confidence, not guesswork.

See a live example->
  • Permission delta shown at upgrade time: additions in green, removals in red
  • Risk delta badge: increased / decreased / unchanged at a glance
  • Scanner status change surfaced when findings appear or clear
  • Rollback instruction changes flagged explicitly
  • Links from the install log and package list directly to the diff page
  • API endpoint at /api/passport/diff for programmatic upgrade gates
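At its core, a passport diff is a set difference per capability category. A minimal sketch, modeling passports as `{category: [grants]}` (the shape and risk heuristic are assumptions, not the /api/passport/diff contract):

```python
def passport_diff(old: dict, new: dict) -> dict:
    """Report which permission grants were added or removed
    between two Capability Passport versions."""
    added, removed = {}, {}
    for cat in set(old) | set(new):
        before = set(old.get(cat, ()))
        after = set(new.get(cat, ()))
        if after - before:
            added[cat] = sorted(after - before)
        if before - after:
            removed[cat] = sorted(before - after)
    # Simple risk badge: any new grant counts as increased risk.
    risk = "increased" if added else ("decreased" if removed else "unchanged")
    return {"added": added, "removed": removed, "risk": risk}

v1 = {"read": ["./workspace"], "network": []}
v2 = {"read": ["./workspace"],
      "network": ["api.example.com"],
      "spend": ["5 USD/day"]}
print(passport_diff(v1, v2))
```

An upgrade gate can then block automatically whenever `risk == "increased"` and route the delta to a human for approval.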

Pick the path that fits your scale.

Free for individual developers. Pro/Team for groups with Gateway governance. Enterprise for self-hosted, air-gapped, and SSO.