# MCP Payments: How to Connect AI Agents to Payment Infrastructure

> Learn how to connect AI agents to payment APIs using the Model Context Protocol (MCP). Set up Dodo Payments' MCP server with Code Mode architecture for creating payments, managing subscriptions, and handling refunds through AI conversation.
- **Author**: Ayush Agarwal
- **Published**: 2026-04-10
- **Category**: AI, Payments, Engineering
- **URL**: https://dodopayments.com/blogs/mcp-payments-ai-agents

---

AI agents are writing code, deploying infrastructure, and managing entire product lifecycles. But most of them still cannot process a payment. The gap between what AI agents can build and what they can bill for is the bottleneck holding back [autonomous AI commerce](https://dodopayments.com/blogs/monetize-ai-agent).

The Model Context Protocol (MCP) closes that gap. MCP is an open standard that gives AI agents structured access to external tools and APIs. When you connect an MCP server to a payment platform, your AI agent can create payments, manage subscriptions, issue refunds, and track usage - all through natural language conversation.

This guide covers how MCP payments work, how to set up the Dodo Payments MCP server in your AI development environment, and what becomes possible when your AI agent has direct access to [payment infrastructure](https://dodopayments.com/blogs/payments-architecture-saas).

## What is MCP and why it matters for payments

The Model Context Protocol is an open standard created by Anthropic that defines how AI applications connect to external data sources and APIs. Think of MCP as a USB-C port for AI agents: a universal interface that lets any compatible AI client talk to any compatible server, regardless of the underlying implementation.

Before MCP, connecting an AI agent to an external API meant writing custom integration code for every tool-LLM combination. If you wanted Claude to access your payment API, you wrote a Claude plugin. If you wanted Cursor to do the same, you wrote a different integration. Every new AI client or API endpoint multiplied the integration work.

MCP standardizes this. An MCP server exposes capabilities through a consistent protocol. Any MCP-compatible client - Claude Desktop, Cursor, Claude Code, Windsurf, VS Code, Zed, or OpenCode - can connect to any MCP server with a single JSON configuration. One server, every client.

For payments specifically, MCP solves three problems:

- **Structured access**: AI agents get typed, documented interfaces to payment operations instead of parsing raw REST responses
- **Security**: API keys are injected server-side and never exposed to the LLM's context window
- **Composability**: Agents can chain multiple payment operations (create customer, attach payment method, start subscription) in a single execution

> We noticed that AI agents were great at writing payment integration code, but terrible at actually executing payment operations. MCP bridges that gap by giving agents the same SDK access that human developers use, just through a protocol they natively understand.
>
> - Ayush Agarwal, Co-founder & CPTO at Dodo Payments

## Traditional payment integration vs. MCP-enabled

To understand why MCP matters, compare the two approaches to building a payment workflow with an AI agent.

### The traditional approach

Without MCP, connecting an AI agent to a payment API requires manual glue code:

1. Human developer reads the payment API documentation
2. Human writes wrapper functions for each endpoint (create payment, list subscriptions, issue refund)
3. Human registers these functions as tools in the AI agent's framework
4. Agent calls one tool at a time, with a full LLM inference round-trip per call
5. Human maintains the wrappers as the API evolves

If the API has 50 endpoints, you write 50 tool definitions. Each definition consumes tokens in the agent's context window. Anthropic found that tool definitions alone could consume over 100K tokens before the conversation starts. And every API update risks breaking your hand-rolled wrappers.
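
To make that burden concrete, here is a sketch of one hand-rolled tool definition of the kind steps 2 and 3 describe. Everything in it - the endpoint URL, the parameter names, the schema shape - is illustrative, not taken from any real framework or API:

```typescript
// One hand-rolled tool definition out of the ~50 a traditional integration
// needs. Every name here (URL, fields, schema shape) is illustrative only.
const createPaymentTool = {
  name: "create_payment",
  description: "Create a payment for a customer",
  parameters: {
    type: "object",
    properties: {
      customer_email: { type: "string" },
      product_id: { type: "string" },
      quantity: { type: "number" },
    },
    required: ["customer_email", "product_id"],
  },
  // The wrapper a human writes, registers with the agent framework,
  // and maintains by hand as the API evolves:
  async execute(args: { customer_email: string; product_id: string; quantity?: number }) {
    const res = await fetch("https://api.example.com/payments", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(args),
    });
    return res.json();
  },
};
```

Multiply this block by every endpoint and every AI client you support, and the maintenance cost of the traditional approach becomes clear.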

### The MCP approach

With MCP, the connection is declarative:

1. Add a JSON configuration block pointing to the MCP server
2. Agent automatically discovers available capabilities
3. Agent writes and executes code against the SDK in a sandboxed environment
4. API keys stay server-side, never touching the LLM context

No wrapper code. No per-endpoint tool definitions. No maintenance burden when the API ships new features.

```mermaid
flowchart LR
    subgraph traditional ["Traditional Approach"]
        A[AI Agent] -->|"tool call 1"| B[Custom Wrapper]
        A -->|"tool call 2"| B
        A -->|"tool call N"| B
        B --> C[Payment API]
    end
    subgraph mcp ["MCP Approach"]
        D[AI Agent] -->|"writes code"| E[MCP Server]
        E -->|"executes in sandbox"| F[Payment SDK]
        F --> G[Payment API]
    end
```

The difference becomes stark at scale. A traditional integration with 50 API endpoints requires 50 tool definitions consuming ~55K tokens of context. The MCP approach uses exactly 2 tools regardless of API size, consuming roughly 1,000 tokens.
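
The arithmetic behind those figures, as a back-of-the-envelope sketch (both per-tool costs are rough estimates quoted in this article, not measurements):

```typescript
// Rough per-definition context costs; both constants are estimates from this article.
const TOKENS_PER_TOOL_DEFINITION = 1_100; // traditional tool-calling, per endpoint
const CODE_MODE_CONTEXT_TOKENS = 1_000; // docs search + code execution, combined

// Traditional integrations pay a context cost proportional to API size.
function traditionalContextCost(endpointCount: number): number {
  return endpointCount * TOKENS_PER_TOOL_DEFINITION;
}

console.log(traditionalContextCost(50)); // 55000 tokens before the conversation starts
console.log(CODE_MODE_CONTEXT_TOKENS); // flat, regardless of API size
```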

## Dodo Payments MCP Server: Code Mode architecture

The [Dodo Payments MCP Server](https://docs.dodopayments.com/developer-resources/mcp-server) uses an architecture called Code Mode, pioneered by Anthropic and Cloudflare. Instead of exposing one tool per API endpoint, Code Mode collapses the entire payment API surface into exactly two tools:

1. **Docs Search Tool** - Queries documentation about the Dodo Payments API and SDK to understand available operations and parameters
2. **Code Execution Tool** - Writes and executes TypeScript code against the Dodo Payments SDK in a secure sandbox environment

This is what makes it different from every other payment MCP implementation. Most MCP servers for payment APIs take the naive approach: one tool for `createPayment`, another for `getSubscription`, another for `issueRefund`, and so on. That approach breaks at scale.

### Why Code Mode is better

**LLMs are better at writing code than calling tools.** LLMs have been trained on millions of lines of real-world TypeScript. Tool-calling schemas are often based on synthetic examples. When you let an agent write `client.payments.create({...})` instead of filling a JSON tool schema, accuracy goes up.

**Context window stays clean.** Two tool definitions consume about 1,000 tokens. Fifty tool definitions consume 55,000+ tokens. Code Mode preserves 95% of the context window for actual conversation.

**Multi-step operations run in one shot.** In traditional tool-calling, listing all subscriptions and then canceling specific ones requires multiple LLM round-trips. In Code Mode, the agent writes a single script that fetches, filters, and acts - all in one sandbox execution.

**Security is built in.** API keys are injected server-side into the sandbox. They never appear in tool parameters, never enter the LLM's context, and cannot be leaked through prompt injection. The sandbox has no filesystem or network access beyond the Dodo Payments API.

Cloudflare validated this approach by collapsing over 2,500 API endpoints into just 2 Code Mode tools. Anthropic measured a 37% reduction in token usage and improved knowledge retrieval accuracy from 25.6% to 28.5% compared to traditional tool-calling.

## Setting up the Dodo Payments MCP server

Connect your AI development environment to the Dodo Payments MCP server with one of these configurations. The remote server is recommended - it requires no local installation and handles authentication via OAuth.

### Cursor

Add to `~/.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "dodopayments": {
      "command": "npx",
      "args": ["-y", "mcp-remote@latest", "https://mcp.dodopayments.com/sse"]
    }
  }
}
```

### Claude Desktop

Add to your Claude Desktop configuration file (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):

```json
{
  "mcpServers": {
    "dodopayments": {
      "command": "npx",
      "args": ["-y", "mcp-remote@latest", "https://mcp.dodopayments.com/sse"]
    }
  }
}
```

### Claude Code

Run in your terminal:

```bash
claude mcp add dodopayments -- npx -y mcp-remote@latest https://mcp.dodopayments.com/sse
```

### Windsurf

Add to `~/.codeium/windsurf/mcp_config.json`:

```json
{
  "mcpServers": {
    "dodopayments": {
      "command": "npx",
      "args": ["-y", "mcp-remote@latest", "https://mcp.dodopayments.com/sse"]
    }
  }
}
```

### Local installation (alternative)

If you prefer running the MCP server locally instead of using the remote server:

```json
{
  "mcpServers": {
    "dodopayments": {
      "command": "npx",
      "args": ["-y", "dodopayments-mcp@latest"],
      "env": {
        "DODO_PAYMENTS_API_KEY": "dodo_test_...",
        "DODO_PAYMENTS_WEBHOOK_KEY": "your_webhook_key",
        "DODO_PAYMENTS_ENVIRONMENT": "live_mode"
      }
    }
  }
}
```

The remote server requires Node.js 18+ and uses OAuth for authentication. On first connection, you will be prompted to enter your API key and select your environment (test or live).

## What you can do: payments, subscriptions, and refunds via AI conversation

Once the MCP server is connected, your AI agent has full access to the Dodo Payments API through the [TypeScript SDK](https://docs.dodopayments.com/developer-resources/dodo-payments-sdks). Here are concrete examples of what becomes possible.

### Create a one-time payment

Ask your AI agent: "Create a payment link for our Pro plan at $49 for customer jane@company.com"

The agent writes and executes this in the Code Mode sandbox:

```typescript
import DodoPayments from "dodopayments";

// No API key in the code: credentials are injected server-side by the sandbox
const client = new DodoPayments();

const payment = await client.payments.create({
  payment_link: true,
  billing: {
    city: "San Francisco",
    country: "US",
    state: "CA",
    street: "123 Market St",
    zipcode: "94105",
  },
  customer: {
    email: "jane@company.com",
    name: "Jane Chen",
  },
  product_cart: [{ product_id: "pdt_pro_plan", quantity: 1 }],
});

console.log("Payment link:", payment.payment_link);
```

### Manage subscriptions

Ask: "Show me all active subscriptions that are up for renewal in the next 7 days"

The agent writes a script that lists subscriptions, filters by renewal date, and returns a formatted summary - all in a single sandbox execution. No multiple round-trips. No hand-rolled tool definitions.
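
As a sketch of what that generated script might look like: the field names (`status`, `next_billing_date`), the `items`-style response shapes, and the auto-paginating `subscriptions.list()` iterator are assumptions modeled with minimal structural types here, not the SDK's exact schema:

```typescript
// Minimal structural type for the fields this sketch touches; the names are
// assumptions for illustration, not the SDK's exact schema.
type Subscription = {
  subscription_id: string;
  status: string;
  next_billing_date: string; // ISO 8601
};

// Pure date check: does the renewal fall within the next `days` days?
export function renewsWithinDays(
  nextBillingDate: string,
  days: number,
  now: Date = new Date()
): boolean {
  const renewal = new Date(nextBillingDate).getTime();
  const horizon = now.getTime() + days * 24 * 60 * 60 * 1000;
  return renewal >= now.getTime() && renewal <= horizon;
}

// `client` stands in for the pre-authenticated SDK instance the sandbox provides.
export async function upcomingRenewals(client: {
  subscriptions: { list(): AsyncIterable<Subscription> };
}): Promise<Subscription[]> {
  const due: Subscription[] = [];
  // Iterate every subscription across pages, filter, collect - one execution.
  for await (const sub of client.subscriptions.list()) {
    if (sub.status === "active" && renewsWithinDays(sub.next_billing_date, 7)) {
      due.push(sub);
    }
  }
  return due;
}
```

The fetch, the filter, and the summary all happen inside one sandbox run; the LLM only sees the final result.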

### Handle refunds

Ask: "Refund the last payment from customer john@startup.io"

The agent searches for the customer, finds their most recent payment, and issues a refund. Three API calls, one script, one execution.
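
Sketched under the same caveats - minimal structural types standing in for the SDK's real schema, with resource names (`customers.list`, `payments.list`, `refunds.create`) and the `items` response shape as assumptions the agent would first confirm via the Docs Search Tool - the three-call flow could look like:

```typescript
type Payment = { payment_id: string; created_at: string };

// Pick the most recent payment by created_at; undefined for an empty list.
export function mostRecentPayment(payments: Payment[]): Payment | undefined {
  return [...payments].sort(
    (a, b) => Date.parse(b.created_at) - Date.parse(a.created_at)
  )[0];
}

// `client` stands in for the pre-authenticated SDK instance in the sandbox.
export async function refundLatestFor(
  client: {
    customers: { list(args: { email: string }): Promise<{ items: { customer_id: string }[] }> };
    payments: { list(args: { customer_id: string }): Promise<{ items: Payment[] }> };
    refunds: { create(args: { payment_id: string }): Promise<{ refund_id: string }> };
  },
  email: string
) {
  // 1. Find the customer by email
  const { items: customers } = await client.customers.list({ email });
  if (customers.length === 0) throw new Error(`no customer for ${email}`);
  // 2. Fetch their payments and pick the most recent
  const { items: payments } = await client.payments.list({
    customer_id: customers[0].customer_id,
  });
  const latest = mostRecentPayment(payments);
  if (!latest) throw new Error("no payments to refund");
  // 3. Issue the refund
  return client.refunds.create({ payment_id: latest.payment_id });
}
```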

This is the real advantage of Code Mode: complex, multi-step billing workflows that would require 5-10 separate tool calls in a traditional MCP setup run as a single TypeScript script. Your agent composes operations the same way a human developer would - by writing code.

For AI-native SaaS products that need [usage-based billing](https://dodopayments.com/blogs/usage-based-billing-saas), the MCP server also supports ingesting usage events and managing [metered billing](https://docs.dodopayments.com/features/usage-based-billing/introduction) programmatically. Your agent can track API calls, compute tokens, or any custom metric and bill accordingly.
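
A minimal sketch of building a usage event for ingestion - the field names follow the usage-based billing docs but should be treated as assumptions, and the `usageEvents.ingest` resource name in the comment is likewise an assumption to verify against the API reference:

```typescript
// Field names here are assumptions drawn from the usage-based billing docs;
// verify the exact event schema in the API reference before relying on it.
export type UsageEvent = {
  event_id: string;
  customer_id: string;
  event_name: string;
  timestamp: string; // ISO 8601
  metadata: Record<string, number | string>;
};

export function buildUsageEvent(
  customerId: string,
  eventName: string,
  units: number,
  at: Date = new Date()
): UsageEvent {
  return {
    event_id: `evt_${at.getTime()}_${customerId}`, // idempotency key
    customer_id: customerId,
    event_name: eventName,
    timestamp: at.toISOString(),
    metadata: { units },
  };
}

// In the sandbox, the agent would hand these to the SDK's ingest call
// (resource name assumed), e.g.:
// await client.usageEvents.ingest({
//   events: [buildUsageEvent("cus_123", "tokens_generated", 1500)],
// });
```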

## Dodo Knowledge MCP: documentation search for AI agents

In addition to the MCP server for executing API operations, Dodo Payments provides a separate [Knowledge MCP](https://docs.dodopayments.com/developer-resources/mcp-server) server. This is a semantic search server that gives AI assistants instant access to the full Dodo Payments documentation and knowledge base.

### What Knowledge MCP provides

- **Semantic documentation search** - Find relevant docs using natural language queries instead of keyword matching
- **Contextual answers** - AI assistants get accurate, up-to-date information about Dodo Payments features
- **Zero setup** - No API keys or local installation required. Connect and start querying

### Setup

Add to your MCP client configuration alongside the main MCP server:

```json
{
  "mcpServers": {
    "dodo-knowledge": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote@latest",
        "https://knowledge.dodopayments.com/mcp"
      ]
    },
    "dodopayments": {
      "command": "npx",
      "args": ["-y", "mcp-remote@latest", "https://mcp.dodopayments.com/sse"]
    }
  }
}
```

### How both servers work together

| Server                 | Purpose              | Example queries                                                    |
| :--------------------- | :------------------- | :----------------------------------------------------------------- |
| **Dodo Knowledge MCP** | Documentation search | "How do I handle webhooks?", "What payment methods are supported?" |
| **Dodo Payments MCP**  | API operations       | Create payments, manage subscriptions, handle refunds              |

With both servers configured, your AI assistant can first search documentation to understand how to implement a feature, then execute the actual API calls. The Knowledge MCP answers "how do I do this?" and the Payments MCP executes it. This mirrors how a human developer works: read the docs, then write the code.

## Agent Skills: procedural knowledge for AI assistants

MCP gives agents access to APIs. [Agent Skills](https://docs.dodopayments.com/developer-resources/agent-skills) teach them how to use those APIs correctly.

Skills are reusable capability packages that provide procedural knowledge, code templates, best practices, and context awareness for implementing specific Dodo Payments features. Think of skills as plugins for your AI assistant that teach it how to implement payment features correctly.

### Available skills

| Skill                        | What it teaches                                                                                                                       |
| :--------------------------- | :------------------------------------------------------------------------------------------------------------------------------------ |
| **dodo-best-practices**      | Comprehensive integration guide with security and error handling patterns                                                             |
| **webhook-integration**      | Setting up and handling webhooks for payment events                                                                                   |
| **subscription-integration** | Implementing [subscription billing](https://dodopayments.com/blogs/set-up-subscription-billing-afternoon) flows with trials and tiers |
| **checkout-integration**     | Creating checkout sessions and payment flows                                                                                          |
| **usage-based-billing**      | Implementing [metered billing](https://dodopayments.com/blogs/implement-usage-based-billing) with events and meters                   |
| **billing-sdk**              | Using BillingSDK React components for pricing pages                                                                                   |
| **license-keys**             | Managing [license keys](https://dodopayments.com/blogs/set-up-license-key-system) for digital products                                |
| **credit-based-billing**     | Implementing credit entitlements and metered credit deduction                                                                         |

### Installing skills

```bash
# Install all Dodo Payments skills
npx skills add dodopayments/skills

# Or install individual skills
npx skills add dodopayments/skills/dodo-payments/webhook-integration
npx skills add dodopayments/skills/dodo-payments/subscription-integration
```

For Claude Code specifically:

```bash
/plugin marketplace add dodopayments/skills
/plugin install checkout-integration
```

### Skills + MCP + Knowledge: the full stack

The three layers work together:

1. **Agent Skills** provide the procedural knowledge: "Here is the correct pattern for implementing a checkout flow with webhooks"
2. **Dodo Knowledge MCP** provides the reference material: "Here is the webhook payload format and the list of event types"
3. **Dodo Payments MCP** executes the operations: "Here is the code running in the sandbox that creates the checkout session"

When you tell your AI agent "add subscription billing to my Next.js app," the agent loads the subscription-integration skill for the implementation pattern, queries Knowledge MCP for API specifics, and can test the integration using the Payments MCP server. This is the difference between an agent that guesses at implementation patterns and one that follows tested, production-ready workflows.

## The future: autonomous AI commerce

MCP payments represent a structural shift in how software handles money. Today, you ask an AI agent to create a payment link and it executes the operation. Tomorrow, AI agents will handle entire billing lifecycles autonomously.

### What is already possible

- AI coding assistants like Cursor and Claude Code can set up complete [payment integrations](https://dodopayments.com/blogs/add-payments-nextjs-app) by combining MCP with agent skills
- [Vibe-coded applications](https://dodopayments.com/blogs/vibe-coding) can ship with billing built in from day one, without the developer manually writing payment code
- Support agents can issue refunds, adjust subscriptions, and resolve billing disputes through conversational interfaces

### What is coming

- **Agent-to-agent commerce**: AI agents purchasing compute, data, and services from other agents using [adaptive pricing models](https://dodopayments.com/blogs/adaptive-pricing-ai-native-startups)
- **Autonomous billing optimization**: Agents monitoring conversion funnels, adjusting pricing, and running A/B tests on checkout flows without human intervention
- **Real-time usage settlement**: As [AI billing platforms](https://dodopayments.com/blogs/ai-billing-platforms) mature, agents will settle micro-transactions in real time based on actual consumption

> The next generation of AI SaaS products will not have a billing page that humans configure. They will have billing agents that optimize revenue autonomously. MCP is the protocol layer that makes this possible.
>
> - Ayush Agarwal, Co-founder & CPTO at Dodo Payments

The payment infrastructure that supports this future needs to be agent-native from the ground up. That means MCP support, Code Mode architecture, usage-based billing primitives, and structured documentation that agents can search and reason about. [Dodo Payments](https://dodopayments.com) is building exactly this stack.

If you are building AI agents, AI-native SaaS, or [vibe-coded applications](https://dodopayments.com/blogs/monetize-vibe-coded-apps) that need to handle money, the MCP server is the fastest path from prompt to payment. Set it up in under a minute, and let your agent handle the rest.

## FAQ

### What is MCP payments and how does it work?

MCP payments refers to using the Model Context Protocol to connect AI agents to payment infrastructure. The AI agent communicates with an MCP server that provides structured access to payment APIs. With Dodo Payments' Code Mode architecture, the agent writes TypeScript code against the SDK that executes in a secure sandbox, enabling operations like creating payments, managing subscriptions, and issuing refunds through natural language conversation.

### Do I need to write custom code to connect my AI agent to Dodo Payments?

No. The Dodo Payments MCP server connects through a single JSON configuration block in your AI client (Cursor, Claude Desktop, Claude Code, or Windsurf). The remote server option requires no local installation and no API keys in the config file. Authentication happens via OAuth on first connection.

### Is the MCP server secure for production payment operations?

Yes. Code Mode provides multiple security layers. API keys are injected server-side and never exposed to the LLM context. Code executes in an isolated sandbox with no filesystem or network access beyond the Dodo Payments API. Only authorized SDK methods are available. You can use test mode keys during development and switch to live mode for production.

### What is the difference between the MCP server and agent skills?

The MCP server gives your AI agent the ability to execute payment API operations. Agent skills give it the knowledge of how to implement payment features correctly. The MCP server is the execution layer ("run this code"), while skills are the knowledge layer ("here is the correct pattern for building a checkout flow"). For the best results, use both together with the Knowledge MCP for documentation search.

### Can AI agents handle usage-based billing through MCP?

Yes. The Dodo Payments MCP server supports the full [usage-based billing](https://docs.dodopayments.com/features/usage-based-billing/introduction) API, including ingesting usage events, querying meters, and managing billing cycles. AI agents can programmatically track API calls, compute tokens, storage usage, or any custom metric and bill customers based on actual consumption.

## Start building with MCP payments

MCP payments are not a theoretical concept. The infrastructure exists today. The Dodo Payments MCP server, Knowledge MCP, and agent skills give AI agents a complete stack for handling money programmatically.

Get started:

1. [Set up the MCP server](https://docs.dodopayments.com/developer-resources/mcp-server) in your AI client
2. [Install agent skills](https://docs.dodopayments.com/developer-resources/agent-skills) for guided implementation patterns
3. Connect the [Knowledge MCP](https://docs.dodopayments.com/developer-resources/mcp-server) for documentation search
4. Explore the [API reference](https://docs.dodopayments.com/api-reference/introduction) for the full list of supported operations

If you are evaluating payment APIs for AI agent development, check our detailed [comparison of payment APIs for AI agents](https://dodopayments.com/blogs/best-payment-api-ai-agents). For pricing details, see [dodopayments.com/pricing](https://dodopayments.com/pricing).

---
- [More AI articles](https://dodopayments.com/blogs/category/ai)
- [All articles](https://dodopayments.com/blogs)