Dodo Payments
Mar 26, 2026 12 min read

Why We Nuked Our Framer Site and Rebuilt in Astro

Ayush Agarwal
Co-founder & CPTO

We just rebuilt dodopayments.com from scratch. Migrated entirely off Framer. Rewrote the whole thing in Astro.

The old site looked fine. Visually, it did its job. But under the hood it was slow - heavy JavaScript bundles, unnecessary client-side rendering for what is fundamentally a static marketing site. PageSpeed scores were mid. Every time we wanted to change something, it felt like fighting the tool instead of building.

So we nuked it.

This is the story of how we rebuilt our marketing site with Astro, Decap CMS, and Cloudflare Workers - and why we used the migration as an opportunity to architect SEO, AEO, and GEO as build-time infrastructure rather than an afterthought.

The Framer Problem

We’re Dodo Payments, a payment infrastructure company. Our website started on Framer because it was fast to ship. Drag, drop, publish. For an early-stage company that needed a landing page yesterday, it was the right call.

But as the site grew - blog posts, glossary entries, comparison pages, product pages, legal docs - the cracks showed.

What Framer Gave Us vs What We Needed
═══════════════════════════════════════════════════════════════════

                    Framer               What we needed
                    ──────               ──────────────
JS bundle           Heavy (~200KB+)      Zero by default
Rendering           Client-side          Static HTML
PageSpeed           Mid (60-75)          95+
SEO control         Limited              Full structured data
Content model       Visual editor        Git-based, typed schemas
Build pipeline      Opaque               Transparent, testable
Vendor lock-in      Complete             None

Three problems kept getting worse:

Crawlers couldn’t see our content. Framer renders pages with JavaScript. Google’s crawler can handle that - eventually. But “eventually” means your new blog post might not get indexed for days. And AI crawlers (ChatGPT, Perplexity) are even less forgiving. If your content depends on JS to render, you’re invisible to a growing chunk of search.

We had no control over structured data. JSON-LD for rich results - FAQ dropdowns, how-to steps, breadcrumbs - requires precise, per-page schema markup. Framer gives you no mechanism for this. Every content type needs a different schema, and we had no way to generate it automatically.

Every change was a fight. Want to add a new section to every blog post? Redesign the pricing page? Update the footer across all pages? In Framer, each change meant clicking through dozens of pages. In code, it’s a single template change that propagates everywhere.

Why Astro Won

We evaluated Next.js, Nuxt, SvelteKit, and Remix. Astro won for one reason: a marketing website should ship zero JavaScript by default.

Astro’s “islands architecture” means every page renders to pure static HTML at build time. Interactive components - pricing calculators, contact forms - hydrate only where explicitly needed. Everything else is plain HTML and CSS.

The result: a site that loads almost instantly. Lighthouse scores through the roof. SEO actually works because crawlers aren’t waiting for JavaScript to render content - there’s nothing to wait for.

Our config is minimal: static output, Cloudflare Workers adapter for the few server-side routes (CMS OAuth, dynamic config), Tailwind CSS v4 via the Vite plugin, and React for the handful of interactive islands that genuinely need client-side state.
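A sketch of what that looks like (a minimal astro.config.mjs under those assumptions; the exact options are illustrative, not our literal config):

```javascript
// astro.config.mjs — illustrative sketch, not our exact config
import { defineConfig } from "astro/config";
import cloudflare from "@astrojs/cloudflare";
import react from "@astrojs/react";
import tailwindcss from "@tailwindcss/vite";

export default defineConfig({
  output: "static",           // every page pre-rendered to HTML at build time
  adapter: cloudflare(),      // only the few server routes run on Workers
  integrations: [react()],    // React for interactive islands only
  vite: {
    plugins: [tailwindcss()], // Tailwind CSS v4 via the Vite plugin
  },
});
```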

The Content Architecture

With Astro, content lives as markdown files in Git. Each content type gets a Zod schema that validates frontmatter at build time:

import { defineCollection, z } from "astro:content";
import { glob } from "astro/loaders";

const blog = defineCollection({
  loader: glob({ base: "./src/content/blog", pattern: "**/*.{md,mdx}" }),
  schema: ({ image }) =>
    z.object({
      title: z.string(),
      description: z.string().optional(),
      banner: image().optional(), // Astro image pipeline
      author: z.string().optional(),
      publishedDate: z.date(),
      modifiedDate: z.date().optional(),
      category: z.union([z.string(), z.array(z.string())]).optional(),
    }),
});

If a content editor publishes a post with a malformed date or missing title, the build fails immediately. In Framer, that kind of error would go live silently.

The image() helper is worth mentioning - it processes images through Astro’s pipeline at build time, generating optimized formats and srcset attributes. No manual image optimization, no third-party image CDN.

Decap CMS: Git-Based Editing Without the Git

Not everyone on our team writes markdown in a code editor. Decap CMS gives content editors a visual interface that commits directly to our GitHub repository. No database, no sync pipeline. Every edit is a git commit with full version history.

The interesting engineering decision: instead of a static config.yml, we generate CMS config dynamically via a Cloudflare Workers endpoint. Branch and OAuth URLs are injected from Workers secrets, so the same admin panel works on staging and production without any hardcoded environment config.
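A minimal sketch of that pattern (the env binding names CMS_REPO, CMS_BRANCH, and OAUTH_BASE_URL are hypothetical placeholders, not our actual secrets):

```javascript
// Sketch of serving Decap's config from a Worker instead of a static
// config.yml. Binding names below are hypothetical placeholders.
function buildCmsConfig(env) {
  return [
    "backend:",
    "  name: github",
    `  repo: ${env.CMS_REPO}`,
    `  branch: ${env.CMS_BRANCH}`, // differs between staging and production
    `  base_url: ${env.OAUTH_BASE_URL}`, // OAuth proxy URL from Workers secrets
    "media_folder: src/assets/uploads",
  ].join("\n");
}

// Worker entry point: the admin panel fetches this instead of config.yml
// (exported as the default export in the real Worker module)
const worker = {
  async fetch(request, env) {
    return new Response(buildCmsConfig(env), {
      headers: { "content-type": "text/yaml" },
    });
  },
};
```

Because the branch and OAuth URL are injected per environment, one admin panel build serves both staging and production.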

Structured Data as Build-Time Infrastructure

This is where the migration paid off most. In Framer, we had zero structured data. In Astro, we built a schema factory system that generates JSON-LD for every page automatically.

Schema Generation Architecture
═══════════════════════════════════════════════════════════════════

  Content (Markdown)          Schema Factory            Rendered Page
  ┌──────────────────┐       ┌──────────────────┐      ┌───────────────┐
  │ frontmatter:     │       │ createBlogPost-  │      │ <script       │
  │   title, date,   │──────►│ ingSchema()      │─────►│  type=ld+json>│
  │   author, desc   │       │                  │      │  {@graph: [   │
  └──────────────────┘       │ createFAQPage-   │      │    BlogPost,  │
  ┌──────────────────┐       │ Schema()         │      │    FAQPage,   │
  │ body:            │──────►│                  │      │    HowTo      │
  │   ## FAQ         │       │ createHowTo-     │      │  ]}           │
  │   ### Question?  │       │ Schema()         │      │ </script>     │
  │   ## Step 1:     │       └──────────────────┘      └───────────────┘
  └──────────────────┘
         │                          │
    Markdown parsing           Auto-extraction
    at build time              from content body

Every page gets two base schemas - Organization and WebSite - and then page-specific schemas layer on top via typed factory functions. Each content type has its own factory that takes exactly the parameters it needs and produces valid JSON-LD.

All multi-schema pages use the @graph pattern - one <script type="application/ld+json"> block with a @graph array instead of multiple separate blocks. This creates a connected entity graph that search engines can traverse.
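In code, the pattern is roughly this (a sketch with field lists trimmed for illustration; the real factories emit more properties):

```javascript
// Sketch of a per-type schema factory plus the @graph wrapper.
// Field lists are trimmed; real factories emit more properties.
function createBlogPostingSchema({ title, description, author, publishedDate, url }) {
  return {
    "@type": "BlogPosting",
    headline: title,
    description,
    author: { "@type": "Person", name: author },
    datePublished: publishedDate,
    url,
  };
}

// One <script type="application/ld+json"> block per page: every schema
// that applies goes into a single @graph array.
function buildJsonLdGraph(schemas) {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@graph": schemas,
  });
}
```

The page layout collects whichever schemas apply (base Organization and WebSite plus the page-specific ones) and serializes them once, so each page emits exactly one connected JSON-LD block.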

Automatic FAQ and HowTo Extraction

The most impactful part: when a blog post has a ## FAQ section, the build process automatically extracts question-answer pairs from ### Question? headings and generates a FAQPage schema. When a post titled “How to…” has ## Step N: headings, it generates a HowTo schema.

Automatic Extraction Flow
═══════════════════════════════════════════════════════════════════

Blog post body (markdown)

      ├── Scan for "## FAQ" heading
      │       │
      │       ▼
      │   Extract ### Question? / paragraph pairs
      │       │
      │       ▼
      │   Strip markdown formatting
      │       │
      │       ▼
      │   createFAQPageSchema(pairs) ──► FAQPage JSON-LD

      ├── Scan for "## Step N:" headings
      │       │
      │       ▼
      │   Check title contains "how to"
      │       │
      │       ▼
      │   createHowToSchema(steps) ──► HowTo JSON-LD

      └── Always: createBlogPostingSchema() ──► BlogPosting JSON-LD

Content writers never touch structured data. They write natural markdown, and the build system handles the rest. Every new FAQ section automatically becomes eligible for Google’s FAQ rich results - no one has to think about it.
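The extraction step itself has roughly this shape (regexes simplified for illustration; the real version also strips inline markdown from answers):

```javascript
// Sketch of FAQ auto-extraction from a post body. Simplified: the real
// pipeline also strips inline markdown formatting from answers.
function extractFaqPairs(markdown) {
  // Everything after the "## FAQ" heading
  const faqSection = markdown.split(/^## FAQ\s*$/m)[1];
  if (!faqSection) return [];
  const pairs = [];
  // Each "### Question?" heading followed by its answer paragraph(s)
  for (const block of faqSection.split(/^### /m).slice(1)) {
    const [question, ...rest] = block.split("\n");
    // Cut the answer off at the next "## " section, if any
    const answer = rest.join("\n").split(/^## /m)[0].trim();
    if (question.trim().endsWith("?") && answer) {
      pairs.push({ question: question.trim(), answer });
    }
  }
  return pairs;
}

function createFAQPageSchema(pairs) {
  return {
    "@type": "FAQPage",
    mainEntity: pairs.map(({ question, answer }) => ({
      "@type": "Question",
      name: question,
      acceptedAnswer: { "@type": "Answer", text: answer },
    })),
  };
}
```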

Git Dates for Accurate Sitemaps

Google uses lastmod in sitemaps to decide crawl priority. Most CMSes use the frontmatter date, which only updates when someone manually changes it. A post can be edited fifty times with the sitemap still showing the original publish date.

We solved this by reading git commit history at build time:

Sitemap lastmod Resolution
═══════════════════════════════════════════════════════════════════

Priority 1: Git commit dates (full clone)
──────────────────────────────────────────
  git log → Map<file, lastCommitDate>
  URL → source files → latest git date

Priority 2: Frontmatter dates (shallow clone fallback)
──────────────────────────────────────────────────────
  If all files share the same git date → shallow clone
  → parse modifiedDate / publishedDate from frontmatter

The build script runs git fetch --unshallow in CI to recover full history before falling back.
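The resolution logic boils down to something like this (a sketch where the per-file commit dates arrive as a Map; in CI they come from parsing git log output):

```javascript
// Sketch of lastmod resolution for one URL. gitDates maps each source
// file of the page to its last commit date (parsed from `git log` in CI).
function resolveLastmod(gitDates, frontmatter) {
  const dates = [...gitDates.values()];
  // In a shallow clone every file reports the same (clone) date,
  // which carries no real signal.
  const allIdentical = new Set(dates.map((d) => d.toISOString())).size <= 1;
  if (dates.length > 0 && !allIdentical) {
    // Priority 1: real history — newest commit touching the page's sources
    return new Date(Math.max(...dates.map((d) => d.getTime())));
  }
  // Priority 2: fall back to frontmatter dates
  return frontmatter.modifiedDate ?? frontmatter.publishedDate;
}
```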

GEO: Making Content Citable by AI

This was the other reason to leave Framer. Generative Engine Optimization - making your content citable by ChatGPT, Perplexity, Gemini, Claude - requires three things Framer couldn’t give us:

1. llms.txt - We publish machine-readable files at /llms.txt and /llms-full.txt with structured company descriptions and a “What We Do Not Do” disambiguation section. That last part prevents AI hallucinations - explicitly stating what your company is not stops AI from conflating you with unrelated products.

2. AI crawler access - Our robots.txt individually lists and allows every major AI crawler: GPTBot, ChatGPT-User, PerplexityBot, ClaudeBot, Anthropic-ai, Google-Extended, Applebot-Extended, and cohere-ai. Each gets its own User-agent block with explicit Allow: /.

3. Speakable specification - We use SpeakableSpecification schemas on content where answer engines are most likely to extract answers. This tells AI systems which parts of a page contain the “answer” content.
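An excerpt of the resulting robots.txt (the full file has one block per crawler listed above):

```txt
# robots.txt — explicit allow-list for AI crawlers (excerpt)
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /
```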

None of this is possible when your site is rendered client-side by a no-code builder.

Testing Search Infrastructure

Early in the migration, a regex refactor silently broke our FAQ extraction for several blog posts. The structured data stopped generating. No build error. No one noticed until someone checked Google Search Console.

That incident pushed us to test search infrastructure the same way we test application code:

AEO Test Suite
═══════════════════════════════════════════════════════════════════

Source Tests (no build, fast):
──────────────────────────────
  ✓ Schema factory exports all required functions
  ✓ JSON-LD component uses @graph pattern
  ✓ Layout imports base schemas
  ✓ robots.txt allows required AI bots
  ✓ llms.txt has all required sections

Build Tests (requires rendered output):
───────────────────────────────────────
  ✓ Every page section has page-specific JSON-LD
  ✓ Required schema fields present per type
  ✓ FAQ sections produce valid FAQPage schemas
  ✓ Content quality meets minimum thresholds

If someone breaks FAQ extraction again, the build fails. No more silent regressions.
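A simplified sketch of one such build test (the real suite walks the built output directory; the regex-based extraction here is illustrative):

```javascript
// Sketch of a build test: given a rendered page's HTML, assert that the
// FAQPage schema survived the build. Regex extraction is illustrative;
// the real suite parses pages from the built output.
function extractJsonLd(html) {
  const m = html.match(/<script type="application\/ld\+json">([\s\S]*?)<\/script>/);
  return m ? JSON.parse(m[1]) : null;
}

function assertFaqSchema(html) {
  const ld = extractJsonLd(html);
  if (!ld) throw new Error("missing JSON-LD block");
  // Pages use the @graph pattern, but tolerate a single bare schema too
  const graph = ld["@graph"] ?? [ld];
  if (!graph.some((node) => node["@type"] === "FAQPage")) {
    throw new Error("FAQ section present but FAQPage schema missing");
  }
}
```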

What We Got Wrong

Underestimating the Migration Effort

We thought “it’s just a marketing site, how hard can it be?” Harder than expected. Framer doesn’t export clean content. Every page needed manual recreation. Every piece of copy needed extraction. The visual design was a starting point, but translating Framer’s auto-layout system to Tailwind CSS required rethinking the responsive behavior of nearly every section.

Budget more time than you think, especially for content migration.

Over-Abstracting the Schema System Early

Our first version of the schema factory tried to handle every Schema.org type through a single generic function. “Flexible” in theory, unusable in practice - every caller had to pass twenty optional parameters. We rewrote it as individual typed functions per content type. More functions, but impossible to misuse.

Not Testing Structured Data from Day One

We shipped dozens of pages before adding the test suite. When we finally wrote the tests, we found several content types missing page-specific schemas entirely. If we’d started with the tests, those bugs wouldn’t have reached production.

Ignoring Shallow Clones

Our CI pipeline used shallow clones by default. Every sitemap entry had the same lastmod date - useless. Google was getting bad date signals for weeks before we noticed.

The Results

Before (Framer) vs After (Astro)
═══════════════════════════════════════════════════════════════════

                        Framer           Astro
                        ──────           ─────
Lighthouse Performance  60-75            95+
JavaScript shipped      ~200KB+          0 KB (default)
Time to first paint     Slow (JS render) Instant (static HTML)
Structured data         None             Auto-generated per page
Rich results eligible   No               Yes (FAQ, HowTo, etc.)
AI crawler indexable    Partial (JS)     Full (static HTML)
Content in Git          No               Yes (full history)
Build-time validation   No               Zod schemas + test suite
Vendor lock-in          Complete         None

Beyond the numbers:

  • SEO actually works - crawlers see static HTML, not a JavaScript loading spinner. Pages get indexed faster. Rich results appear in search.
  • AI engines cite our content - we show up in ChatGPT and Perplexity answers for queries in our domain. That wasn’t happening on Framer.
  • Full control over every pixel - no more “why is this component re-rendering on every page” or mysterious build steps.
  • Search regressions caught in CI - the FAQ regex incident can’t happen again.

Should You Do This?

Migrating off a no-code builder makes sense if:

  • Your site has grown beyond simple landing pages into a content platform (blog, glossary, comparisons)
  • PageSpeed is suffering because of JavaScript-heavy rendering
  • You need structured data for rich results and can’t add it in your current tool
  • AI search visibility matters and your site requires JS to render content
  • You want content in Git with typed validation, not locked in a proprietary editor
  • You have engineering capacity to build and maintain a custom site

Stay on your no-code builder if:

  • You have fewer than 10 pages and no blog
  • Speed of shipping visual changes matters more than PageSpeed scores
  • You don’t have engineering capacity to maintain a codebase
  • Your content doesn’t need structured data or AI visibility

The honest answer: if you’re running a SaaS marketing site on a no-code builder and wondering why your PageSpeed is garbage and your content doesn’t show up in AI search results, this is your sign. But the migration is real work. Don’t underestimate it.

For us, full control over the stack - no vendor lock-in, no mysterious build steps, no fighting the tool - was worth every hour of the migration.

Key Takeaways

  1. No-code builders are great for launching, not for scaling. Framer got us live fast. But client-side rendering, no structured data, and vendor lock-in became blockers as the site grew.
  2. Ship zero JavaScript by default. Astro’s static output means crawlers see your content instantly. If your marketing site ships 200KB of JavaScript, you’re handicapping your own SEO.
  3. Structured data should be generated, not authored. Schema factories that take typed inputs and produce valid JSON-LD eliminate an entire class of errors. Content writers should never see a @type field.
  4. Test your search infrastructure in CI. Treat structured data like application code. If a regex change can silently break FAQ rich results, you need automated tests.
  5. Git dates beat frontmatter dates for lastmod. Frontmatter dates go stale. Git commit history reflects reality. Handle shallow clones with a fallback.
  6. Publish llms.txt with disambiguation. AI engines need to know what you do and what you don’t do. The “What We Do Not Do” section prevents hallucinations.
  7. Budget 2x for the migration. Content extraction from no-code builders is manual and tedious. The engineering work is the easy part - content migration is what takes time.

Build with us

We're building the payments and billing platform for SaaS, AI, and digital products. Come help us ship.

View Open Positions