Connect OpenAI Agents SDK
The helper `to_openai_agents_tools` (imported from `canopy_ai.openai_agents`) returns ready-to-use `FunctionTool` instances for all five canonical Canopy tools. Pass them straight to `agents.Agent(tools=...)` — no `@function_tool` decorators, no manual signatures.
Run `npx @canopy-ai/sdk connect` in your project root. It opens a consent page in your browser, then writes credentials to `~/.config/canopy/credentials` and merges a `canopy` MCP server entry into any installed client: Claude Code, Cursor, Claude Desktop, Windsurf, Cline, VS Code, or Zed. If you use it, skip Steps 2 and 4 below.

Step 1 — Connect your agent in the dashboard
Canopy is bring-your-own-agent. This step doesn't create the agent itself — you've already built that, or are about to. It registers a Canopy-side record that pairs your agent with a spending policy and gives you an agt_… ID to use in your code.
Sign in at trycanopy.ai and go to Agents → Connect agent. Give the agent a name and pick (or create) a policy. The policy controls the spend cap, recipient allowlist, and approval threshold every payment from this agent will be evaluated against.
Step 2 — Copy your credentials
You need two values in your code:
- Org API key (`ak_live_…` or `ak_test_…`) — from Settings → API Keys. Copy it the moment you create it; the plaintext is shown only once.
- Agent ID (`agt_…`) — from the agent's detail page in /dashboard/agents.
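Both values have recognizable prefixes, so a quick format check can catch a swapped or truncated credential before the agent ever makes a request. A minimal sketch based on the prefixes above — the helper itself is illustrative, not part of the canopy-ai SDK:

```python
# Sanity-check Canopy credential formats before starting the agent.
# The ak_live_/ak_test_/agt_ prefixes come from the docs above; this
# helper is illustrative and not part of the canopy-ai SDK.
def check_canopy_credentials(api_key: str, agent_id: str) -> None:
    if not api_key.startswith(("ak_live_", "ak_test_")):
        raise ValueError("API key should start with ak_live_ or ak_test_")
    if not agent_id.startswith("agt_"):
        raise ValueError("agent ID should start with agt_")

check_canopy_credentials("ak_test_example", "agt_example")  # no error raised
```

Running this at startup turns a mispasted key into an immediate, readable error instead of a failed API call later.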
Step 3 — Install the package
```
pip install 'canopy-ai[openai-agents]'
```

Step 4 — Set your environment variables
```
CANOPY_API_KEY=ak_live_xxxxxxxxxxxxxxxx
CANOPY_AGENT_ID=agt_xxxxxxxx
```

Use a `.env` file locally and your platform's secret manager in production. Never commit credentials.
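For local development, a `.env` file is usually loaded with the `python-dotenv` package (`load_dotenv()`), but a few lines of stdlib work too. A minimal sketch, assuming plain `KEY=VALUE` lines — python-dotenv handles quoting, exports, and edge cases far more robustly:

```python
import os

def load_dotenv_minimal(path: str = ".env") -> None:
    # Parse simple KEY=VALUE lines; skip blanks and '#' comments.
    # Variables already set in the real environment take precedence.
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Call it once at the top of your entry point, before reading `os.environ` anywhere else.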
Step 5 — Connect in your agent code
Paste the snippet below into your existing OpenAI Agents agent.
```python
# 1. Add to your .env:
# CANOPY_API_KEY=ak_live_xxxxxxxxxxxxxxxx
# CANOPY_AGENT_ID=agt_xxxxxxxx

# 2. In your agent code:
import os

from agents import Agent, Runner

from canopy_ai import Canopy
from canopy_ai.openai_agents import to_openai_agents_tools

canopy = Canopy(
    api_key=os.environ["CANOPY_API_KEY"],
    agent_id=os.environ["CANOPY_AGENT_ID"],
)

agent = Agent(
    name="Treasurer",
    instructions="Pay recipients when asked.",
    tools=to_openai_agents_tools(canopy),
)

result = Runner.run_sync(agent, "Send 10 cents to 0x1234...")
print(result.final_output)
```

Step 6 — Verify the connection
Run your agent once. As soon as Canopy receives a request from it, the dashboard flips the agent to connected and shows the first event captured. If nothing happens after a minute, see Troubleshooting.
Install

`canopy_ai.openai_agents` requires the optional dependency `openai-agents`. Install it with `pip install 'canopy-ai[openai-agents]'`.
Why the helper instead of @function_tool?
The `@function_tool` decorator inspects each function's signature and docstring to build the tool schema. That works fine for one or two tools but gets verbose at five — and the canonical tool descriptions Canopy ships are already optimized for LLM tool selection. `to_openai_agents_tools` reuses them directly via `FunctionTool(name, description, params_json_schema, on_invoke_tool)`.
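Roughly, the helper's job reduces to repackaging each canonical tool's shipped metadata rather than re-deriving it from Python signatures. A hedged sketch of that shape using plain dicts in place of the real `FunctionTool` construction — the tool name and schema below are illustrative, not Canopy's actual definitions:

```python
# Illustrative only: one canonical tool's metadata as Canopy might ship it.
# The real helper wraps each entry in agents.FunctionTool; this sketch just
# shows the direct reuse of name / description / JSON schema.
CANONICAL_TOOLS = [
    {
        "name": "send_payment",  # hypothetical tool name
        "description": "Send a payment to an allowlisted recipient.",
        "params_json_schema": {
            "type": "object",
            "properties": {
                "recipient": {"type": "string"},
                "amount_cents": {"type": "integer"},
            },
            "required": ["recipient", "amount_cents"],
        },
    },
]

def to_tool_specs(tools):
    # Reuse shipped metadata directly instead of inspecting Python
    # signatures and docstrings the way @function_tool would.
    return [
        (t["name"], t["description"], t["params_json_schema"])
        for t in tools
    ]

specs = to_tool_specs(CANONICAL_TOOLS)
```

The point of the design: the schema and description are authored once, centrally, and every framework adapter reuses them verbatim.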
Where to go next
- Discover services — let agents find paid APIs to call
- Python SDK reference — `to_openai_agents_tools` and the rest of the SDK surface