
Preston "Brady" Adger

Facebook Said Our Ads Were Spam. Our Ads Were Fine. Our Infrastructure Was Not.

Last week I woke up to a flood of ad rejections in Facebook Ads Manager. Every rejection said the same thing: spam.

panicking.gif

If you've dealt with Facebook's rejection reasons before, you know "spam" is basically their way of saying "something's wrong, go figure it out." It could be your copy, your landing page, your URL structure, your pixel, a word choice Facebook doesn't like that week — anything. Super useful.

So I did the obvious thing: I started tearing apart the landing page content. Reread every headline. Questioned every CTA. Wondered if "free" was a banned word now.

Hours of this. Nothing.

Then I checked the server metrics.

The actual problem

Our p99 response times had flatlined. Pages were either crawling or not loading at all. And when Facebook's systems crawl your landing pages to validate ads before approving them, a page that doesn't respond fast enough just gets rejected, apparently under the label "spam."

So the content was fine. The server was the problem.

But we don't get much traffic to these pages. Why was the server struggling?

I pulled the logs.

Facebook was sending 30 requests per page within a single second.

before_architecture.png

We have 950 active ads. Each one points to a landing page. Facebook crawls all of them — repeatedly, and apparently all at once. One server, never designed for that kind of burst, had no shot.

The fix

Once the actual problem was clear, the solution was straightforward: add as many caching layers as possible between Facebook's crawlers and the origin server.

Next.js fetch cache

The frontend was fetching fresh data from our CMS on every single request. Switched to using Next.js's built-in fetch cache to persist responses to disk. Now when 30 crawlers hit the same page in a second, one of them does the work and the rest get the cached response.
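Roughly what that looks like in the App Router. This is a sketch, not our real code: the CMS URL, the route, and the response shape are placeholders.

```typescript
// app/lp/[slug]/page.tsx (excerpt) — illustrative only.
type LandingPage = {
  title: string;
  body: string;
};

export async function getLandingPage(slug: string): Promise<LandingPage> {
  // `next.revalidate` opts this fetch into Next.js's Data Cache: the first
  // request hits the CMS and the response is persisted; everyone else for the
  // next 5 minutes gets the cached copy instead of a fresh CMS round trip.
  const res = await fetch(`https://cms.example.com/api/pages/${slug}`, {
    next: { revalidate: 300 },
  });
  if (!res.ok) {
    throw new Error(`CMS returned ${res.status} for slug "${slug}"`);
  }
  return res.json();
}
```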

In-memory cache on the CMS

Added a lightweight in-memory cache on the CMS side as a fallback. Anything that slips past the Next.js cache doesn't hit the database.
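Nothing fancy, just a TTL map in front of the database lookup. A sketch, assuming a `findPageBySlug` helper that does the actual query:

```typescript
// Minimal in-memory TTL cache on the CMS side — names are assumptions.
type Entry<T> = { value: T; expiresAt: number };

const TTL_MS = 60_000;
const cache = new Map<string, Entry<unknown>>();

export async function cached<T>(key: string, load: () => Promise<T>): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value as T; // served from memory, no database query
  }
  const value = await load(); // cache miss: go to the database once
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}

// Usage: wrap the slug lookup so repeated crawler hits skip the database.
// const page = await cached(`page:${slug}`, () => findPageBySlug(slug));
```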

A missing database index

The Slug field in our CMS is defined as a unique field. You'd expect that to come with an automatic database index. It didn't. Every page lookup was doing a full table scan.

system_diagram.gif

Added the index manually. Query times dropped immediately.
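If your CMS lets you run migrations, it's a one-liner. A hypothetical Knex migration, assuming a Postgres `pages` table with a `slug` column (the actual table and column names will depend on your CMS):

```typescript
// migrations/add_pages_slug_index.ts — hypothetical names, adjust to your schema.
import type { Knex } from "knex";

export async function up(knex: Knex): Promise<void> {
  // The index the "unique" field setting implied but never created at the DB level.
  await knex.raw("CREATE UNIQUE INDEX IF NOT EXISTS pages_slug_idx ON pages (slug)");
}

export async function down(knex: Knex): Promise<void> {
  await knex.raw("DROP INDEX IF EXISTS pages_slug_idx");
}
```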

Cloudflare CDN

Threw Cloudflare in front of the whole stack. Repeat requests for the same page get served from the edge and never touch the origin server at all.
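One gotcha: Cloudflare doesn't cache HTML by default, so you need a Cache Rule marking the landing pages as eligible for cache, plus cache-friendly headers from the origin. The header side might look like this; the values are illustrative and the `/lp/:slug` route is an assumption, not our real URL structure.

```typescript
// next.config.ts (Next.js 15+) — illustrative values only.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  async headers() {
    return [
      {
        source: "/lp/:slug", // assumed landing-page route
        headers: [
          {
            // s-maxage lets Cloudflare keep the HTML for 5 minutes;
            // stale-while-revalidate serves a stale copy while the origin refreshes.
            key: "Cache-Control",
            value: "public, s-maxage=300, stale-while-revalidate=3600",
          },
        ],
      },
    ];
  },
};

export default nextConfig;
```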

after_architecture.png

One extra thing: cache warmup

After a server restart, all the caches are cold. The first wave of requests hits the origin just as hard as before the fix, which is not great when Facebook can crawl at any time. Added a job to pre-warm the cache with the most recent landing pages on startup so there's no cold-start window. We also audit every landing page for health before its URL ever goes to Facebook, which has the nice side effect of caching each one before Facebook's crawlers see it.
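The warmup itself is just a loop over recent slugs, hitting each page once so every layer (Next.js fetch cache, CMS cache, CDN) has a warm copy. A sketch; `listRecentSlugs` and `BASE_URL` are assumptions:

```typescript
// Startup warmup job — sketch only.
const BASE_URL = process.env.BASE_URL ?? "http://localhost:3000";

export async function warmCache(slugs: string[]): Promise<void> {
  for (const slug of slugs) {
    try {
      // One request per page is enough to populate every cache layer in front of the DB.
      await fetch(`${BASE_URL}/lp/${slug}`);
    } catch (err) {
      console.warn(`warmup failed for ${slug}`, err);
    }
  }
}

// Called on startup with the most recently published landing pages:
// await warmCache(await listRecentSlugs(100));
```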

Where it ended up

Before: p99 response times were effectively a flatline. Pages timing out under crawler load.

After: p99 averaging 500ms. Best case hitting 25ms.

The ad rejections cleared up once the pages were actually loading for Facebook's crawlers.

The whole thing was a performance problem that looked like a content problem. If you ever wake up to a wave of spam rejections with no obvious content reason, check your server metrics before you spend two hours rewriting headlines.


Running into something similar? Happy to compare notes.
