Gunpal Jain

We shipped a 73-article help center in a week with Claude Code — and ChatGPT started citing us 3 days later

TL;DR — We used Claude Code with Playwright browser automation to write 73 help articles + 95 annotated screenshots for our Loan CRM in about a week. We added a /llms.txt file to the site root. Three days later, ChatGPT started answering product-specific questions about our app with citations back to our help center. Here's the actual stack, the workflow, and the failure modes we hit.

I'm one of the builders at Argentron Technologies, working on SigmaDSA — an AI-powered Loan CRM for DSAs (Direct Selling Agents) in the Indian lending ecosystem. The product is real and shipping: leads pipeline, 9-stage file workflow, a Bank Statement Analyzer that parses 50+ Indian bank formats, WhatsApp + Telegram bots, the works.

What we did not have, until last week, was documentation.

This is the post-mortem of how we fixed that in a week of focused evenings, with two surprising wins along the way: we didn't have to hire a technical writer, and we accidentally got into ChatGPT's answer set without paying for an SEO agency.

The stack

Nothing exotic. The combination is what matters:

  • Claude Code — Anthropic's terminal-based coding agent. Reads the codebase, edits files, runs commands, drives Playwright.
  • Playwright — for navigating the live app and capturing controlled-viewport screenshots.
  • Next.js 16 + Fumadocs — the docs site framework. MDX content, automatic sidebar, structured data hooks, App Router everything.
  • Tailwind 4 for styling, Framer Motion for the aurora-gradient hero.
  • The llms.txt convention — a markdown index at the site root for LLM consumption.

The marketing site (Next.js) and the CRM (Angular 21 + ABP 10 + .NET 10 + MongoDB) are separate deploys. The help center is part of the marketing site. Both auto-build on commit.

The loop

The actual workflow looked like this:

```text
me: "Now do the Files section. Top-level article + the 8 sub-articles
     in the File Detail tab folder, then the 10-article Credit Evaluation
     series."

claude code:
  1. greps the Angular routes for /files/*
  2. reads the FilesComponent + child components to understand
     what each tab does
  3. spins up Playwright, logs into the live app, navigates the screens
  4. captures screenshots at a fixed viewport (1920x2800) into
     content/help/files/
  5. annotates each screenshot with orange boxes and numbered badges
     using sharp+svg compositing
  6. writes 19 MDX articles with frontmatter, steps, FAQs, and JSON-LD
  7. updates the meta.json sidebar config
  8. runs `next build` to verify everything compiles
```

I'd review the result, point at one or two things, iterate. The total elapsed wall-clock per section was 20–60 minutes depending on depth.

What broke (the honest part)

Every "we used AI to build this" post that skips the failure modes is misleading. Here are the real ones:

1. Playwright fullPage screenshots reflow the viewport mid-capture.

The Ynex theme we use in the CRM is responsive. When fullPage: true resizes the viewport to capture the entire page, the sidebar collapses into mobile mode partway through the shot, clipping content and leaving LF-prefixed file numbers cut in half.

Fix: set a fixed tall viewport before the screenshot and don't use fullPage.

```js
await page.setViewportSize({ width: 1920, height: 2800 });
await page.screenshot({ path: 'files-list.png' }); // no fullPage
```

This pattern alone saved a dozen reshoots.

2. MDX expression placeholders that look like prose.

A help article said "Enter the applicant's {name} and {phone}" in plain prose. MDX evaluated {name} and {phone} as JS expressions and the build failed with ReferenceError: name is not defined.

Fix: wrap inline placeholders in backticks so MDX treats them as code.

```mdx
Enter the applicant's `{name}` and `{phone}`.
```

3. YAML colons in FAQ answers.

We use a custom MDX front-matter block for FAQs that builds the FAQPage JSON-LD. An answer like "DSCR is calculated as: Net Operating Income / Total Debt Service" blew up the YAML parser because of the unquoted colon.

Fix: quote the answer string or remove the colon. We picked single-quotes globally to avoid having to think about it per-answer.
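
For illustration (field names hypothetical, since our front-matter schema is custom), the single-quoted form that keeps the YAML parser happy:

```yaml
faq:
  - q: How is DSCR calculated?
    a: 'DSCR is calculated as: Net Operating Income / Total Debt Service'
```

Without the quotes, YAML reads the colon after "as" as a second key-value separator and fails.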

4. The first annotation style I let the agent pick was wrong.

Round one was dotted-line callouts radiating from a single image. It looked like a circuit diagram. I sent a screenshot of the previous style we'd liked — orange boxes with numbered corner badges — and said "do it like this." Two iterations later it was right and stayed right for all 95 screenshots.

5. ABP-specific quirks the agent didn't know.

ABP Framework Roles need IsPublic = true to appear in the user-assignment dropdown. The agent wrote a Roles article that said "save the role and assign it to a user," missing the public flag. I corrected once; it stuck for every subsequent admin article.

The pattern: about 15 correction-cycles across the entire build, each about a sentence long. Total time spent on corrections ≈ 45 minutes.

The /llms.txt surprise

While building the docs site, we added one file at the root: /llms.txt. It's a markdown index of the entire site in the format defined at llmstxt.org.

Conceptually it's the LLM equivalent of sitemap.xml. Practically, it's a curated, prose-friendly list of every URL on your site with one-line summaries grouped by section:

```md
# SigmaDSA

> AI-powered Loan CRM for DSAs and connectors in the Indian lending market.

## Marketing
- [Home](https://sigmadsa.com): product overview
- [Pricing](https://sigmadsa.com/pricing): four tiers — CRM, Pro, AI, Custom

## Help — Leads
- [Adding a Lead](https://sigmadsa.com/help/leads/new): step-by-step
- [Lead Sources](https://sigmadsa.com/help/leads/sources): managing where leads come from
...
```

The Next.js implementation is a single route handler:

```ts
// src/app/llms.txt/route.ts
import { source } from '@/lib/source';

export async function GET() {
  const pages = source.getPages();
  const body = [
    '# SigmaDSA',
    '',
    '> AI-powered Loan CRM for DSAs and connectors.',
    '',
    ...pages.map(p => `- [${p.data.title}](${p.url}): ${p.data.description}`),
  ].join('\n');

  return new Response(body, {
    headers: { 'Content-Type': 'text/markdown; charset=utf-8' },
  });
}
```
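If you want the grouped-by-section output shown earlier rather than a flat list, the mapping can be factored into a pure helper. This is a sketch, not our exact route code; the `HelpPage` shape is an assumption standing in for whatever your page source returns:

```typescript
// Hypothetical page shape; adapt to your docs framework's page objects.
type HelpPage = { url: string; title: string; description: string };

// Build an llms.txt body, grouping pages by their first URL segment.
function buildLlmsTxt(site: string, pages: HelpPage[]): string {
  const groups = new Map<string, HelpPage[]>();
  for (const p of pages) {
    // "/help/leads/new" → section "help"; root pages fall into "root".
    const section = p.url.split('/').filter(Boolean)[0] ?? 'root';
    const arr = groups.get(section) ?? [];
    arr.push(p);
    groups.set(section, arr);
  }
  const lines: string[] = [`# ${site}`, ''];
  for (const [section, items] of groups) {
    lines.push(`## ${section}`);
    for (const p of items) {
      lines.push(`- [${p.title}](${p.url}): ${p.description}`);
    }
    lines.push('');
  }
  return lines.join('\n').trimEnd() + '\n';
}
```

Because it's a pure function of the page list, it regenerates correctly on every deploy, same as the flat version.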

That's it. Deploy. Three days later:

  • "How do I set up WhatsApp Cloud API for a loan CRM in India?" → ChatGPT now cites our /help/admin/whatsapp walkthrough.
  • "What's a good DSCR for a business loan?" → Our /help/files/credit-evaluation/dscr article is referenced.
  • "BotFather Telegram bot for CRM" → lands on our setup guide.

We didn't pay for AEO consulting. We didn't build backlinks. We added one file and the answer engines did the rest. The traditional path to getting cited by ChatGPT is "write a lot of content, build a lot of backlinks, hope to outrank competitors on Bing." An llms.txt file short-circuits that.

If you're building a B2B product where buyers ask ChatGPT before they ask Google, this is durable distribution and it's nearly free to add.

What we actually shipped

For anyone considering doing this:

  • 73 articles across 11 top-level sections
  • 95 annotated screenshots with consistent orange-box + numbered-badge style
  • 103 statically prerendered pages at build time
  • 14 modal walkthroughs — Add Bank Login, Record Sanction, Record Disbursal, Add Lender, Add Partner, etc. Each with a fully annotated screenshot showing every numbered region.
  • JSON-LD per article — Article + BreadcrumbList + HowTo + FAQ schemas where applicable
  • Sitemap + robots.txt + llms.txt generated dynamically from the same source
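
For reference, the FAQ schema in that list produces JSON-LD fragments along these lines (content illustrative, taken from the DSCR example earlier):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How is DSCR calculated?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "DSCR is calculated as: Net Operating Income / Total Debt Service"
    }
  }]
}
```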

The full site is at sigmadsa.com/help. The LLM index is at sigmadsa.com/llms.txt. Both rebuild on every deploy so the help stays in sync with the product.

Patterns worth stealing

A few things that worked well enough to use again:

Fix the viewport, don't fullPage. Save yourself the responsive-reflow pain.

Annotate with SVG composited via sharp. Not Photoshop, not a callout generator. A small script that takes (x, y, width, height, number) and burns it onto the PNG. Reproducible, scriptable, agent-friendly.
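
A minimal sketch of what that script can look like: generate the overlay as an SVG string, then composite it onto the PNG. The `Region` type, colors, and badge sizing here are illustrative assumptions, not our exact values:

```typescript
// A numbered annotation region: box position/size plus its badge number.
type Region = { x: number; y: number; width: number; height: number; n: number };

// Build an SVG overlay of orange boxes with numbered corner badges.
function annotationSvg(w: number, h: number, regions: Region[]): string {
  const parts = regions.map(r => `
    <rect x="${r.x}" y="${r.y}" width="${r.width}" height="${r.height}"
          fill="none" stroke="#f97316" stroke-width="4" rx="6"/>
    <circle cx="${r.x}" cy="${r.y}" r="16" fill="#f97316"/>
    <text x="${r.x}" y="${r.y + 6}" text-anchor="middle"
          font-size="18" font-weight="bold" fill="#fff">${r.n}</text>`);
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${w}" height="${h}">${parts.join('')}</svg>`;
}

// Compositing step (assumes the sharp package is installed):
// await sharp('files-list.png')
//   .composite([{ input: Buffer.from(annotationSvg(1920, 2800, regions)), top: 0, left: 0 }])
//   .toFile('files-list.annotated.png');
```

Because the agent can call this with exact coordinates, re-annotating after a UI change is a re-run, not a redraw.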

Treat docs as code. MDX in the same repo as the marketing site. Sidebar config in JSON. Sitemap, robots, llms.txt all generated from source.getPages(). Adding a new help article means adding a file and committing — no CMS, no review queue, no separate deploy.

Use Fumadocs' ---separator--- meta.json syntax to group sidebar entries without changing URLs. We grouped Admin into Users & Roles / Workflow Catalogue / Notifications / AI Bots and Setup into Branding & Website / Email / Domain & DNS after the agent finished writing — no URL refactors required.
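
A meta.json using that separator convention looks roughly like this (page slugs hypothetical):

```json
{
  "pages": [
    "---Users & Roles---",
    "users",
    "roles",
    "---AI Bots---",
    "whatsapp-setup",
    "telegram-setup"
  ]
}
```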

Keep a tight feedback loop. I directed at the section level ("now do Reports"), not the article level. The agent decided sub-structure, I corrected sub-structure once if needed. This kept my involvement at strategic granularity instead of micromanaging each article.

What this means

If you're sitting on undocumented product, the calculus has changed.

Two years ago you had three options: hire a writer (slow + expensive), do it yourself (slow + you won't), or skip it (slow death by support tickets). None of those were good.

Now there's a fourth: pair a coding agent with browser access, point it at your live app, treat docs as code. The agent reads your codebase, drives the UI, captures real screenshots, writes the prose, builds the SEO structured data, updates the sitemap. You direct the work and correct the mistakes.

A week of focused evenings for what would have been a quarter of a writer's salary. And the bonus prize — your product ending up in ChatGPT's answer set — is no longer a side project. It's an artifact of doing the documentation well in the first place.

If you're trying this on your own product and want to compare notes, my DMs are open. Or browse the result we shipped: sigmadsa.com/help.


Argentron Technologies builds SigmaDSA — an AI-powered Loan CRM for DSAs and connectors. Based in India.
