I asked Perplexity "what is the cheapest price API for AI agents?" — and got back three competitors, none of them me.
My API was ranking for humans. Invisible to agents. Here's the three-layer fix.
I'm building AgentShare solo from Vietnam — a price/offer data layer for AI agents. The whole point of the product is being agent-readable, so failing the "can an AI find me" test was a wake-up call.
This article is the receipt: what I implemented, what each layer is for, and what changed.
## The mental model: three layers, not one
Most "SEO for APIs" articles stop at sitemap + meta tags. That works for human search, but AI agents read your site through a different stack. After staring at logs for a week I started thinking about it like this:
Three crawler tiers, three different signals. The same API needs to satisfy all of them.
| Layer | Who reads it | Files / signals |
|---|---|---|
| 1. SERP / human search | Google, Bing | `sitemap.xml`, `robots.txt`, Open Graph, Schema.org JSON-LD |
| 2. Agent-native discovery | ChatGPT search, Perplexity, Claude crawlers | `llm.txt`, `agent.json`, `/.well-known/*` |
| 3. Live access | Cursor, Claude Desktop, your custom agent | MCP Streamable HTTP, OpenAPI 3, REST |
If you only do Layer 1, you'll show up on Google but never inside an AI answer. If you only do Layer 3, your MCP works but no agent finds it organically. You need all three.
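A quick way to see where you stand is to probe a domain for each layer's artifacts. This script is my illustration, not part of AgentShare; the paths follow the table above, and since MCP endpoints usually want POST, a 405 still counts as "present":

```ts
// layer-check.ts — probe a domain for each discovery layer (illustrative sketch)
const origin = "https://agentshare.dev";

const probes: Record<string, string> = {
  "Layer 1: sitemap.xml": "/sitemap.xml",
  "Layer 2: agent.json": "/agent.json",
  "Layer 2: llm.txt": "/llm.txt",
  "Layer 3: openapi.json": "/openapi.json",
  "Layer 3: MCP endpoint": "/mcp",
};

for (const [label, path] of Object.entries(probes)) {
  const res = await fetch(origin + path);
  // 405 = endpoint exists but rejects GET, typical for MCP over Streamable HTTP
  const present = res.ok || res.status === 405;
  console.log(`${present ? "✓" : "✗"} ${label} (${res.status})`);
}
```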
## Layer 1 — Make Schema.org tell the truth about your pricing
I had Open Graph cards and a sitemap. What I didn't have was structured data on the pricing page. That's the difference between Google rendering a generic blue link and Google showing the actual $0 / $9 / $39 tiers right in the search result.
Here's the JSON-LD I emit on `/pricing` (trimmed for clarity):

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "SoftwareApplication",
      "@id": "https://agentshare.dev/#software-application",
      "name": "AgentShare Price API",
      "applicationCategory": "DeveloperApplication",
      "operatingSystem": "Cloud (HTTPS)",
      "offers": [
        {
          "@type": "Offer",
          "name": "Free",
          "price": "0",
          "priceCurrency": "USD",
          "availability": "https://schema.org/InStock",
          "eligibleQuantity": {
            "@type": "QuantitativeValue",
            "value": 100,
            "unitText": "requests/month"
          }
        },
        {
          "@type": "Offer",
          "name": "Basic",
          "price": "9",
          "priceCurrency": "USD",
          "priceSpecification": {
            "@type": "UnitPriceSpecification",
            "price": "9",
            "priceCurrency": "USD",
            "billingDuration": "P1M"
          }
        },
        {
          "@type": "Offer",
          "name": "Pro",
          "price": "39",
          "priceCurrency": "USD",
          "priceSpecification": {
            "@type": "UnitPriceSpecification",
            "price": "39",
            "priceCurrency": "USD",
            "billingDuration": "P1M"
          }
        }
      ]
    }
  ]
}
```
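That object has to reach the browser as a raw `application/ld+json` script tag. A minimal sketch, assuming a React/Next.js pricing page (the article doesn't say what the site actually runs on):

```tsx
// PricingJsonLd.tsx — embed a JSON-LD document in the rendered page (sketch)
export function PricingJsonLd({ doc }: { doc: object }) {
  return (
    <script
      type="application/ld+json"
      // JSON-LD must be raw JSON text, so bypass React's child escaping
      dangerouslySetInnerHTML={{ __html: JSON.stringify(doc) }}
    />
  );
}
```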
Two non-obvious lessons:
1. Don't include an `Offer` with a non-numeric `price`. I tried adding an "Enterprise (Custom)" tier; Google's Rich Results validator throws an error, and that error invalidates the rich result for the whole page. Enterprise still lives in the HTML pricing card. It just doesn't enter the JSON-LD.
2. Use `UnitPriceSpecification` with `billingDuration: "P1M"` on subscription tiers so the SERP shows "$9/month" instead of a confused "$9".
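Lesson 1 is easy to enforce at build time. A sketch using this article's tiers; the guard itself is my illustration, not AgentShare's actual code:

```ts
// Keep non-numeric "price" values out of the emitted JSON-LD offers.
type Tier = { name: string; price: string };

const tiers: Tier[] = [
  { name: "Free", price: "0" },
  { name: "Basic", price: "9" },
  { name: "Pro", price: "39" },
  { name: "Enterprise", price: "Custom" }, // stays in the HTML card, never in JSON-LD
];

const offers = tiers
  .filter((t) => /^\d+(\.\d+)?$/.test(t.price)) // Offer.price must parse as a number
  .map((t) => ({
    "@type": "Offer",
    name: t.name,
    price: t.price,
    priceCurrency: "USD",
  }));

console.log(JSON.stringify(offers, null, 2)); // feeds the "offers" array above
```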
*Google's official rich-result validator confirming the SoftwareApplication + 3 offers structure is eligible.*
After deploying, Google Rich Results Test reported 1 valid item · Eligible for rich results. That was the first metric that actually moved.
## Layer 2 — The agent-native files nobody talks about
This is the part I underestimated the most. Google has 25 years of crawler infrastructure; AI agents have 18 months. They look for very specific, named files at very specific paths:
- `/.well-known/agentshare.json` (or `/agent.json`; pick one and link the other)
- `/llm.txt` (a plain-text summary of the product, written for LLMs)
- `/llms-full.txt` (full structured outline of your API surface)
- `/.well-known/mcp.json` (announces your MCP endpoint)
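For a feel of the second file, here's a plausible minimal `/llm.txt`, assembled from details elsewhere in this article; the file AgentShare actually serves may differ:

```txt
# AgentShare: structured price and offer data for AI agents

> Query prices and offers via REST or MCP. Free tier: 100 requests/month.

## Endpoints
- REST: https://agentshare.dev/api/v1 (auth: X-API-Key header)
- MCP (Streamable HTTP): https://agentshare.dev/mcp
- OpenAPI: https://agentshare.dev/openapi.json

## Pricing
- Free: $0, 100 requests/month
- Basic: $9/month
- Pro: $39/month
```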
A minimal `agent.json` looks like this:

```json
{
  "name": "AgentShare Price API",
  "version": "1.0",
  "description": "Structured price and offer data for AI agents.",
  "endpoints": {
    "rest": "https://agentshare.dev/api/v1",
    "mcp": "https://agentshare.dev/mcp",
    "openapi": "https://agentshare.dev/openapi.json"
  },
  "auth": { "type": "api_key", "header": "X-API-Key" },
  "register": "https://agentshare.dev/api/v1/auth/register",
  "pricing": "https://agentshare.dev/pricing"
}
```
Why this matters: when an AI assistant lands on your site, it doesn't read your hero copy. It fetches agent.json, finds the auth model, the MCP endpoint, the OpenAPI URL — and decides in one round-trip whether to use you.
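That round-trip is easy to picture in code. A sketch of the agent side, my illustration rather than any particular assistant's implementation:

```ts
// discover.ts — what an agent's one-round-trip discovery can look like (sketch)
type AgentManifest = {
  name: string;
  endpoints: { rest: string; mcp: string; openapi: string };
  auth: { type: string; header: string };
};

async function discover(origin: string): Promise<AgentManifest | null> {
  // The article links both locations; try the well-known path first
  for (const path of ["/.well-known/agentshare.json", "/agent.json"]) {
    const res = await fetch(origin + path);
    if (res.ok) return (await res.json()) as AgentManifest;
  }
  return null;
}

const manifest = await discover("https://agentshare.dev");
if (manifest) {
  console.log(`MCP at ${manifest.endpoints.mcp}, auth via ${manifest.auth.header} header`);
}
```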
I also added explicit `User-agent` blocks for GPTBot, ClaudeBot, and Google-Extended to robots.txt (each with `Allow: /`; a sketch follows below), plus Cloudflare content signals:
```txt
content-signal: ai-input=yes
content-signal: ai-train=no
content-signal: search=yes
```
`ai-input=yes` says "agents may use my data to answer a user's live query", which is exactly the deal a price API wants. `ai-train=no` keeps you out of training corpora. `search=yes` is the green light for AI search engines.
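For reference, the robots.txt stanzas behind that paragraph look like this; the user-agent tokens are the crawlers' published names, and allowing the whole site is my simplification:

```txt
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /
```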
## Layer 3 — MCP is the consumption layer, not the discovery layer
I think a lot of devs confuse "having an MCP server" with "being discoverable to agents". They're orthogonal.
MCP (Model Context Protocol) is how a connected agent consumes your API once it's already decided to use you. It's brilliant for that — Claude Desktop, Cursor, and an increasing list of clients can plug into a Streamable-HTTP MCP endpoint and call your tools as if they were native.
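A minimal client-side sketch, assuming the official TypeScript SDK's Streamable HTTP transport (check the SDK docs for the current API surface):

```ts
// mcp-client.ts — connect to the AgentShare MCP endpoint and list its tools (sketch)
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(
  new URL("https://agentshare.dev/mcp"),
  { requestInit: { headers: { "X-API-Key": "YOUR_KEY" } } }
);

const client = new Client({ name: "demo-client", version: "1.0.0" });
await client.connect(transport);

// If discovery worked, the tool schemas come back without any manual config
console.log(await client.listTools());
```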
But MCP doesn't get you discovered. An MCP endpoint that nobody knows exists is the API-equivalent of an unlisted YouTube video. So I expose all three:
- REST: `https://agentshare.dev/api/v1/search`, `/offers/best`, …
- MCP Streamable HTTP: `https://agentshare.dev/mcp`
- OpenAPI: `https://agentshare.dev/openapi.json`
…and I link to all of them from agent.json (Layer 2), and I rank for them via SoftwareApplication schema (Layer 1). The three layers are designed to feed into each other.
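To close the loop on the REST surface, here's a direct call; the query parameter name and response handling are my assumptions, and `/openapi.json` is the source of truth:

```ts
// rest-call.ts — hit the REST layer directly (sketch; see /openapi.json for the real schema)
const res = await fetch(
  "https://agentshare.dev/api/v1/search?q=gpu+hosting", // "q" is an assumed parameter name
  { headers: { "X-API-Key": process.env.AGENTSHARE_KEY ?? "" } }
);

if (!res.ok) throw new Error(`AgentShare returned ${res.status}`);
console.log(await res.json());
```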
## What actually changed
Concrete, not vanity:
- Bing Webmaster Tools: sitemap accepted within 24 hours of submission; all 9 public pages discovered and indexed.
- Google Rich Results Test: 1 valid item detected · SoftwareApplication · 3 offers eligible for rich snippets.
- schema.org validator: 0 errors, 0 warnings, 2 entities detected (SoftwareApplication + WebAPI).
- Open Graph debugger: per-page unfurl with price + description (no more generic favicon on X/LinkedIn).
- AI agent surface: when a Cursor user adds `https://agentshare.dev/mcp` to their MCP config, the tools auto-discover with full schema, no manual setup.
It took roughly two evenings end to end. The hardest part wasn't writing the JSON-LD; it was reframing the question from "how do I rank on Google?" to "how does an AI know I exist?"
## ⚡ What I'd do differently (so you don't make the same mistakes)
If I started over tomorrow, these three changes would save me the most time:
1. Start with Layer 2. If you're API-first, `agent.json` and `llm.txt` are higher-leverage per hour than perfect Open Graph cards. I did them last. Do them first.
2. Don't conflate "indexed" with "visible". Bing showed `indexed: yes` 24h after my first deploy. The page was technically indexed but had no Schema.org, no per-page Open Graph, and a 95-character title. Pass the SEO basics first, then optimise.
3. Validate every layer with a separate tool. Google Rich Results, the schema.org validator, Bing Webmaster Tools, an OG debugger, a manual Perplexity probe. They catch different things. Don't assume one green checkmark means you're done.
## Try it
If you're building anything in the AI agent space, the discovery URLs below are public, no auth:
- 🌐 agentshare.dev — the data layer for agent commerce
- 📄 agentshare.dev/agent.json — agent-native discovery file
- 🤖 agentshare.dev/llm.txt — LLM-readable summary
- 📚 agentshare.dev/docs — REST + MCP quickstart
- 💲 agentshare.dev/pricing — free tier, no card (100 req/month)
- 🔑 agentshare.dev/signup — email + password, key shown once
Or just point your favourite MCP client at https://agentshare.dev/mcp with X-API-Key: YOUR_KEY and watch the tools auto-load.
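In Cursor that's one config entry. A sketch, assuming Cursor's current remote-server `mcp.json` format (double-check against Cursor's docs):

```json
{
  "mcpServers": {
    "agentshare": {
      "url": "https://agentshare.dev/mcp",
      "headers": { "X-API-Key": "YOUR_KEY" }
    }
  }
}
```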
Built solo from Vietnam 🇻🇳 — no team, no funding. If this was useful, the best signal you can send back is a comment about what your stack does for agent discoverability. I'm taking notes.