Why We Built Our Own Context Layer for AI Agents
Our docs were stale, scattered across endpoints, and our users were getting wrong answers. So we fixed it.
At Dodo Payments, we’ve always believed in making payments simple. That’s why we built Sentra, an AI agent that takes you from a prompt to a production-ready payments setup.
People loved it. They started using it to add payments to their apps, and for a while everything felt smooth.
It Worked, Until It Did Not
Under the hood, Sentra depends heavily on context: our docs, repos, and APIs. We were using Context7 for this, and initially it worked fine.
But as our codebase and docs started moving faster, things began to break in subtle ways.
Docs were not always in sync. The agent would sometimes pick up outdated context, hallucinate APIs, or generate code that no longer matched reality.
Even worse, our documentation and repos were spread across multiple endpoints. So for every user query, we had to first make an extra AI call just to figure out which docs to search, and then another one to actually retrieve the context.
This made the system:
- Slower
- Inaccurate
- Unreliable
The Real Problem We Were Actually Fighting
At some point we realized we were not fighting a tooling issue. We were fighting context drift.
Docs change. New features get added. APIs evolve.
But the agent was always slightly behind reality. Reindexing was manual and things went out of date without anyone noticing.
Context is not a feature. Context is infrastructure.
If this layer is unreliable, everything built on top of it becomes unreliable. You cannot fix this with better prompts or better models.
That’s when we decided to fix this.
Building ContextMCP
We searched for open-source alternatives to Context7 that we could self-host, but unfortunately (or fortunately?) we could not find anything.
So we decided to build it ourselves.
That is how ContextMCP was born: a self-hosted alternative to Context7, built for real, fast-moving docs.
ContextMCP’s parser understands code blocks, headers and semantic boundaries to keep context intact.
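As a rough illustration of that idea (a sketch of heading-aware chunking, not ContextMCP's actual parser), the key property is that a chunk boundary never lands inside a fenced code block:

```typescript
// Sketch: split a markdown document into chunks at headings, but never
// inside a fenced code block, so code examples stay intact. Illustrative
// only; ContextMCP's real parser also handles semantic boundaries.
function chunkMarkdown(doc: string): string[] {
  const FENCE = "`".repeat(3); // the ``` fence marker
  const chunks: string[] = [];
  let current: string[] = [];
  let inFence = false;
  for (const line of doc.split("\n")) {
    if (line.trimStart().startsWith(FENCE)) inFence = !inFence;
    // Start a new chunk at each heading, unless we are inside a fence.
    if (!inFence && /^#{1,6}\s/.test(line) && current.length > 0) {
      chunks.push(current.join("\n"));
      current = [];
    }
    current.push(line);
  }
  if (current.length > 0) chunks.push(current.join("\n"));
  return chunks;
}
```

A naive line- or token-count splitter would happily cut a code example in half, which is exactly how an agent ends up retrieving broken snippets.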
Then we replaced Context7 with our newly created system in Sentra and the impact was immediate. Responses became faster and more accurate.
We set up automatic reindexing so context stays fresh without anyone needing to think about it.
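One common way to keep automatic reindexing cheap is to hash each document and reprocess only what changed between runs. A minimal sketch of that pattern (the function and its shape are ours for illustration, not ContextMCP's internals):

```typescript
import { createHash } from "node:crypto";

// Sketch: compare content hashes against the previous run and return only
// the paths that need reindexing. Illustrative, not ContextMCP's actual
// implementation.
function diffForReindex(
  docs: Map<string, string>,       // path -> current content
  lastHashes: Map<string, string>, // path -> hash from the previous run
): string[] {
  const stale: string[] = [];
  for (const [path, content] of docs) {
    const hash = createHash("sha256").update(content).digest("hex");
    if (lastHashes.get(path) !== hash) {
      stale.push(path);
      lastHashes.set(path, hash); // remember for the next run
    }
  }
  return stale;
}
```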
This also helped us remove the extra AI call we were making to route queries to the right docs since now everything sits in a single unified index.

Open Source
When we realized that it had in fact made our workflow better, we decided to open source it for everyone. It is completely open source and self-hostable.
Getting started is intentionally simple. You run `npx contextmcp init`, add your environment variables, edit `config.yaml`, and your entire context pipeline is up and running.
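For a sense of what that config covers, a pipeline definition might look roughly like this; the field names below are hypothetical, so check the repository for the actual schema:

```yaml
# Hypothetical config.yaml sketch; field names are illustrative,
# not ContextMCP's actual schema.
sources:
  - name: docs
    type: git
    url: https://github.com/your-org/docs
    include: ["**/*.md"]
reindex:
  interval: 15m   # how often to check sources for changes
```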
We also built this MCP-native from day one, so it works out of the box with Cursor, Windsurf, Claude Desktop, and any other MCP client.
For places where MCP is not suitable, it also exposes a simple REST API so your custom agents and workflows can use the same context layer.
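Assuming a search-style endpoint (the `/search` route and response shape below are illustrative, not the documented API), calling it from a custom agent could look like:

```typescript
// Sketch: query a ContextMCP deployment over its REST API from a custom
// agent. The /search endpoint and { chunks } response shape are assumptions
// for illustration; see the repository for the real routes.
async function queryContext(baseUrl: string, query: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/search?q=${encodeURIComponent(query)}`);
  if (!res.ok) throw new Error(`context request failed: ${res.status}`);
  const body = (await res.json()) as { chunks: string[] };
  return body.chunks;
}
```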
Both the remote MCP server and the REST API run on Cloudflare Workers and are ready to deploy when you initialize ContextMCP.
Final Thoughts
Everyone is racing to build better agents. Better models, better prompts, better reasoning loops.
But none of that matters if your agent is reasoning about a world that no longer exists.
You can get started with ContextMCP here.
If you have improvement suggestions or want to contribute, check out the ContextMCP GitHub Repository.
If you have any questions about ContextMCP, join our Discord Community; we are quite active there.
Build with us
We're building the payments and billing platform for SaaS, AI, and digital products. Come help us ship.
View Open Positions