Connect OpenAI Chat Completions
canopy.openai.tools() returns the exact [{ type: "function", function: { ... } }] shape OpenAI expects. canopy.openai.dispatch(toolCalls) runs every Canopy tool call from an assistant message and returns tool messages already shaped for the next turn — no manual lookup-and-execute loop.
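To make the shape concrete, here is an illustrative sketch of the kind of array canopy.openai.tools() returns. The canopy_pay name comes from this page; the description and parameter schema are assumptions for illustration, not the SDK's real schema.

```typescript
// Illustrative only: the shape OpenAI's `tools` option expects, which
// canopy.openai.tools() produces for you. Parameters here are made up.
const tools = [
  {
    type: "function" as const,
    function: {
      name: "canopy_pay",
      description: "Send a payment, subject to the agent's Canopy policy",
      parameters: {
        type: "object",
        properties: {
          recipient: { type: "string" },
          amountUsd: { type: "number" },
        },
        required: ["recipient", "amountUsd"],
      },
    },
  },
];
```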
Prefer a one-command setup? Run npx @canopy-ai/sdk connect in your project root. It opens a consent page in your browser, then writes credentials to ~/.config/canopy/credentials and merges a canopy MCP server entry into any installed Claude Code, Cursor, Claude Desktop, Windsurf, Cline, VS Code, or Zed. If you use this path, skip Steps 2 and 4 below.

Step 1 — Connect your agent in the dashboard
Canopy is bring-your-own-agent. This step doesn't create the agent itself — you've already built that, or are about to. It registers a Canopy-side record that pairs your agent with a spending policy and gives you an agt_… ID to use in your code.
Sign in at trycanopy.ai and go to Agents → Connect agent. Give the agent a name and pick (or create) a policy. The policy controls the spend cap, recipient allowlist, and approval threshold every payment from this agent will be evaluated against.
Step 2 — Copy your credentials
You need two values in your code:
- Org API key (ak_live_… or ak_test_…) — from Settings → API Keys. Copy it the moment you create it; the plaintext is shown only once.
- Agent ID (agt_…) — from the agent's detail page in /dashboard/agents.
Step 3 — Install the package
```shell
npm install @canopy-ai/sdk
```

Step 4 — Set your environment variables
```
CANOPY_API_KEY=ak_live_xxxxxxxxxxxxxxxx
CANOPY_AGENT_ID=agt_xxxxxxxx
```

Use a .env file locally and your platform's secret manager in production. Never commit credentials.
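A missing credential usually surfaces later as a confusing auth error, so it can help to check at startup. Here is a minimal sketch of such a check; the helper name missingEnv is hypothetical, but the variable names match Step 4.

```typescript
// Hedged sketch: report which Canopy env vars are absent so the agent
// can fail fast with one clear message. Pass process.env in real code.
type Env = Record<string, string | undefined>;

function missingEnv(names: string[], env: Env): string[] {
  return names.filter((name) => !env[name]);
}

// In your entrypoint:
// const missing = missingEnv(["CANOPY_API_KEY", "CANOPY_AGENT_ID"], process.env);
// if (missing.length) throw new Error(`Missing env vars: ${missing.join(", ")}`);
```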
Step 5 — Connect in your agent code
Paste the snippet below into your existing OpenAI agent.

```typescript
// 1. Add to your .env:
// CANOPY_API_KEY=ak_live_xxxxxxxxxxxxxxxx
// CANOPY_AGENT_ID=agt_xxxxxxxx

// 2. In your agent code:
import { Canopy } from '@canopy-ai/sdk';
import OpenAI from 'openai';

const canopy = new Canopy({
  apiKey: process.env.CANOPY_API_KEY,
  agentId: process.env.CANOPY_AGENT_ID,
});
const openai = new OpenAI();

const messages: OpenAI.ChatCompletionMessageParam[] = [
  { role: 'user', content: 'pay 5 cents to 0x1234...' },
];

const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages,
  tools: canopy.openai.tools(),
});

// Execute any tool_calls and feed the tool messages back next turn:
const toolMessages = await canopy.openai.dispatch(
  completion.choices[0].message.tool_calls,
);
if (toolMessages.length) {
  messages.push(completion.choices[0].message);
  messages.push(...toolMessages);
}
```

Step 6 — Verify the connection
Run your agent once. As soon as Canopy receives a request from it, the dashboard flips the agent to connected and shows the first event captured. If nothing happens after a minute, see Troubleshooting.
How dispatch behaves
- Skips non-Canopy tool calls — your host loop dispatches user-defined tools; Canopy only handles canopy_pay, canopy_check_url, canopy_discover_services, canopy_approve, and canopy_deny.
- Embeds errors as JSON — if a tool throws, the tool message content becomes {"error": "..."} so the LLM can react instead of crashing the loop.
- Pending approvals propagate intact — when canopy_pay returns pending_approval, the rich fields (recipientName, amountUsd, expiresAt, chatApprovalEnabled) land in the tool message. The LLM can ask the user inline and call canopy_approve / canopy_deny next turn.
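If your host loop wants to surface a pending approval to the user itself, it can inspect the tool message content dispatch produced. The sketch below shows one way to do that; the message shape mirrors OpenAI tool messages and the field names follow this page, but the exact payload layout (a top-level status field) is an assumption.

```typescript
// Hedged sketch: detect the pending_approval outcome in a dispatched
// tool message so the loop can prompt the user before the next turn.
type ToolMessage = { role: "tool"; tool_call_id: string; content: string };

type PendingApproval = {
  status: "pending_approval";
  recipientName: string;
  amountUsd: number;
  expiresAt: string;
  chatApprovalEnabled: boolean;
};

function pendingApprovalOf(msg: ToolMessage): PendingApproval | null {
  try {
    const parsed = JSON.parse(msg.content);
    // Only treat the message as a pending approval if the status matches.
    return parsed?.status === "pending_approval"
      ? (parsed as PendingApproval)
      : null;
  } catch {
    return null; // content was not JSON; nothing to approve
  }
}
```

In a real loop you would call this on each message returned by canopy.openai.dispatch and, on a hit, ask the user inline before issuing canopy_approve or canopy_deny on the following turn.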
Where to go next
- Payment outcomes — what the LLM gets when a payment is allowed, pending, or denied
- Connect OpenAI Agents SDK — different API surface (Python)
- TypeScript SDK reference