APAI.run v0.1

Integrations

Real code patterns for wiring APAI into the agent frameworks and coding tools you already use.

LangChain / LangGraph

Use APAI-installed tools inside LangChain or LangGraph agents through MCP. For production, prefer the MCP Gateway path (Phase 4+) so all tool access is centrally authenticated, audited, and rate-limited.

import os

from langchain_mcp import MCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Connect to the APAI MCP Gateway (Phase 6: mcp.apai.run)
gateway_url = "https://gateway.apai.run/mcp"
api_key = os.getenv("APAI_API_KEY")

client = MCPClient(
    base_url=gateway_url,
    headers={"Authorization": f"Bearer {api_key}"},
)

# List tools the agent is allowed to call (filtered by gateway RBAC)
tools = client.list_tools()

# Build the LangGraph agent; any LangChain chat model works here
model = ChatOpenAI(model="gpt-4o")
agent = create_react_agent(model, tools)

Pair this with LangSmith for tracing. The MCP Gateway emits OpenTelemetry spans that flow into LangSmith automatically when configured.
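On the LangSmith side, tracing is switched on through LangChain's standard environment variables; set them before building the agent. A minimal sketch (the project name below is an arbitrary example, not an APAI convention):

```python
import os

# LangChain's standard LangSmith tracing switches; set these before
# constructing the agent so each run is recorded.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-key"
os.environ["LANGCHAIN_PROJECT"] = "apai-agents"  # arbitrary project bucket
```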

CrewAI

Use APAI-installed tools as agent tools in CrewAI. The same gateway pattern works: connect once, scope tools per crew.

from crewai import Agent, Task, Crew
from mcp import MCPClient

mcp = MCPClient("https://gateway.apai.run/mcp", api_key="your-key")

# Tools come from the APAI-managed registry
github_tools = mcp.get_tools("github")
slack_tools = mcp.get_tools("slack")

researcher = Agent(
    role="Researcher",
    goal="Research using available tools",
    tools=github_tools + slack_tools,
    verbose=True,
)

# Assemble a crew so the agent can actually run
task = Task(
    description="Summarize recent GitHub activity and post it to Slack",
    expected_output="A short summary message",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()

The llm.txt manifests APAI generates per package give CrewAI agents better context about what each tool does, which reduces misuse.
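One way to surface those manifests is to inline them into an agent's prompt context. A minimal stdlib sketch, assuming a hypothetical per-package manifest location (the ~/.apai/packages path is illustrative, not a documented APAI layout):

```python
from pathlib import Path

def tool_context(package: str, root: str = "~/.apai/packages") -> str:
    """Return the llm.txt manifest for a package, or a stub if absent."""
    # Hypothetical manifest path; adjust to wherever APAI writes manifests.
    manifest = Path(root).expanduser() / package / "llm.txt"
    if manifest.exists():
        return manifest.read_text()
    return f"(no llm.txt manifest found for {package})"

# Fold the manifest into whatever backstory/system prompt the agent takes
backstory = "Tool usage notes:\n" + tool_context("github")
```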

Continue.dev (VS Code / JetBrains)

Continue.dev is an open-source AI coding agent. Configure it to talk to the APAI MCP Gateway:

{
  "mcpServers": {
    "apai-gateway": {
      "url": "https://gateway.apai.run/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_APAI_TOKEN"
      }
    }
  }
}

Then use @apai-gateway in chat or custom commands to invoke any tool installed via APAI.

Claude Code

Claude Code reads MCP server config from ~/.claude/mcp_servers.json. APAI can register itself as a remote MCP server at install time, or you can wire it manually:

{
  "mcpServers": {
    "apai": {
      "url": "https://gateway.apai.run/mcp",
      "headers": { "Authorization": "Bearer YOUR_APAI_TOKEN" }
    }
  }
}

Once configured, Claude Code can invoke any APAI-managed tool while respecting the package's Capability Passport approval triggers.

Codex / Cursor / Gemini CLI

Codex, Cursor, and Gemini CLI all support MCP servers via similar JSON or TOML config. APAI is platform-neutral: install once, the same gateway URL works for every MCP-compatible client.

# Codex (~/.codex/config.toml)
[mcp_servers.apai]
url = "https://gateway.apai.run/mcp"
auth_header = "Bearer YOUR_APAI_TOKEN"

# Cursor (~/.cursor/mcp.json)
{
  "apai": {
    "url": "https://gateway.apai.run/mcp",
    "headers": { "Authorization": "Bearer YOUR_APAI_TOKEN" }
  }
}

# Gemini CLI (~/.gemini/mcp_servers.json)
{
  "apai": {
    "url": "https://gateway.apai.run/mcp",
    "headers": { "Authorization": "Bearer YOUR_APAI_TOKEN" }
  }
}

v0.1 reminder: the APAI Gateway endpoint at mcp.apai.run ships with Phase 6. For now, APAI runs as a CLI scaffold + spec set; the configs above describe the Phase 4+ wiring so teams can prepare integration.

General MCP-compatible client pattern

For any MCP-compatible client, you can write a thin HTTP wrapper that targets the APAI Gateway:

import httpx

class APAIMCPClient:
    def __init__(self, gateway_url: str, token: str):
        self.client = httpx.Client(
            base_url=gateway_url,
            headers={"Authorization": f"Bearer {token}"},
        )

    def list_tools(self):
        return self.client.get("/tools").json()

    def call_tool(self, tool_name: str, arguments: dict):
        return self.client.post(f"/tools/{tool_name}/call", json=arguments).json()

Always prefer the Gateway path for production and shared environments. Direct MCP server connections are fine for local development.

What integrates with what

For a quick view of which AI agent platforms APAI tracks today, see /catalog (external sources we track but do not host). For the install mode each platform uses, see deployment patterns.