Apogee Watcher

Posted on • Originally published at apogeewatcher.hashnode.dev

Manual PageSpeed runs are honest. They also do not scale.

Few teams pretend manual PageSpeed checks are useless. When you are about to merge a template change, or a client is pressing for an answer on one URL, opening PageSpeed Insights and reading the filmstrip is often the fastest path to a defensible answer. The tool is honest in the narrow sense: it shows what that lab profile saw on that run.

The problem is not honesty. It is surface area: how many URLs, templates, and deploys your process has to cover before "we checked it" means anything.

What manual runs still do well

Keep them in the toolbox for a short list of jobs:

  • Shipping decisions on a single critical route (checkout, signup, campaign landing page) where you can stare at the trace and argue from evidence.
  • Teaching a junior dev or a client why LCP moved when you removed one blocking script. One slow URL on a projector beats ten slides about Core Web Vitals in the abstract.
  • Sanity checks after a deploy when you already know which URL should have changed and you only need before/after.

Those moments reward a human with the tab open. They do not require a product pitch. They need time and attention you are happy to spend once.

Where the model cracks

Agencies rarely stop at one URL. You inherit ten sites, then twenty, each with multiple templates, environments that drift, and stakeholders who ask "is it still fast?" on Monday without remembering what you changed on Friday.

Manual PageSpeed runs do not give you:

  • History across deploys unless someone disciplined keeps screenshots and filenames straight.
  • Coverage of the long tail of URLs clients actually hit (campaign pages, old blog posts, third-party injected routes).
  • A single place where the team agrees what "good enough" means for LCP, INP, or CLS before the week gets noisy.

So the failure mode is not "we opened PSI once." The failure mode is treating occasional manual runs as governance for a portfolio. That is when answers drift into guesswork, or senior people burn time re-running the same five URLs whenever someone feels nervous.

Scale is a boring word for a boring spreadsheet

When we say manual checks "do not scale," we do not mean "hire faster clickers." We mean the work has to turn into repeatable routes, stored runs, and thresholds someone agreed in advance. Otherwise every week invents a new definition of green.
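As a concrete sketch, the "repeatable routes and agreed thresholds" half of that system can be as small as a config checked into the repo. Every name and number below is hypothetical; the point is that the definition of green lives in one place instead of being reinvented weekly:

```python
# Hypothetical portfolio config: which URLs get tested, how often,
# and the numeric budgets the team agreed on before the week got noisy.
PORTFOLIO = {
    "client-a": {
        "routes": [
            "https://client-a.example/checkout",
            "https://client-a.example/signup",
            "https://client-a.example/blog/top-post",
        ],
        "schedule": "weekly",
        # Stricter than Google's "good" cut-offs, on purpose.
        "budgets": {"lcp_ms": 2500, "inp_ms": 200, "cls": 0.1},
    },
}
```

A file like this is also the artifact a stakeholder can review: the Monday "is it still fast?" question becomes "did any route cross its budget?", which is answerable without anyone remembering Friday's deploy.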

That is the handoff: from hero manual work to a system that still uses lab data, but does not depend on you remembering to open the tab.

Time, cost, and when each approach wins are in this walkthrough. It stays deliberately dry in places; portfolios need boring tables more than another motivational thread about performance culture.

Automation in practice: scheduling, storage, alerts

In our experience talking to agency teams, automation is not magic. It is scheduling plus storage plus alerts so regressions show up as facts instead of Slack rumours.
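The "storage plus alerts" half can be sketched in a few lines: persist each lab run as a row, then treat a budget violation as a stored fact rather than a Slack rumour. This is a minimal illustration with an assumed schema and made-up numbers, not a real monitoring product:

```python
import sqlite3

# Minimal storage-plus-alerts sketch: one table of lab runs, and an
# alert is simply a metric that crossed its agreed budget on insert.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE runs (url TEXT, run_at TEXT, lcp_ms REAL, inp_ms REAL, cls REAL)"
)

BUDGETS = {"lcp_ms": 2500, "inp_ms": 200, "cls": 0.1}  # agreed in advance

def record_run(url, run_at, metrics):
    """Store the run, then return any budget violations as alert strings."""
    conn.execute(
        "INSERT INTO runs VALUES (?, ?, ?, ?, ?)",
        (url, run_at, metrics["lcp_ms"], metrics["inp_ms"], metrics["cls"]),
    )
    return [
        f"{url}: {name}={metrics[name]} over budget {limit}"
        for name, limit in BUDGETS.items()
        if metrics[name] > limit
    ]

# Example: LCP regressed past budget; INP and CLS are fine.
alerts = record_run(
    "https://client-a.example/checkout",
    "2024-05-06",
    {"lcp_ms": 3100, "inp_ms": 180, "cls": 0.05},
)
```

Scheduling is whatever your team already trusts (cron, CI, a hosted runner); the point is that history accumulates in one place and regressions surface as rows, not memories.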

If you are ready to standardise how sites get added, how often key templates are tested, and how budgets map to notifications, the procedural spine lives in the setup guide.

We still open manual tools during incidents. We just refuse to pretend that workflow alone is a monitoring strategy when the URL count multiplies.

A small habit that bridges the gap

If you are not ready to wire a full platform yet, you can still fix the governance gap this week:

  1. Pick three URLs per client that represent revenue or reputation (not only the homepage).
  2. Write down one numeric threshold per metric you care about (even if it is stricter than Google's "good" cut-off).
  3. Run manual checks on a fixed weekday, paste the three numbers into the same note, same channel, every time.
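If it helps to keep the note format identical week to week, the three steps above can be reduced to a tiny formatter: type in the numbers you read off PageSpeed Insights, paste the output lines into the same channel. The thresholds here are illustrative and deliberately stricter than Google's "good" cut-offs:

```python
# Hypothetical weekly-note formatter: one line per URL, same shape
# every week, so drift is visible at a glance in the channel history.
THRESHOLDS = {"lcp_ms": 2000, "inp_ms": 150, "cls": 0.05}  # illustrative

def note_line(url, lcp_ms, inp_ms, cls):
    """Format one URL's manually-read metrics, flagging budget breaches."""
    reading = {"lcp_ms": lcp_ms, "inp_ms": inp_ms, "cls": cls}
    flags = [name for name, limit in THRESHOLDS.items() if reading[name] > limit]
    status = "OVER: " + ", ".join(flags) if flags else "ok"
    return f"{url}  LCP={lcp_ms}ms  INP={inp_ms}ms  CLS={cls}  [{status}]"

print(note_line("https://client-a.example/checkout", 1800, 120, 0.02))
```

Nothing here automates the run itself; it only makes the note cheap to keep consistent, which is exactly the discipline the manual habit tends to lose first.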

That is not full automation. It is proof you know what "scale" would need to store. When the note becomes painful to maintain, you are probably ready for the boring system.

Closing

Manual PageSpeed runs stay honest, and they stay manual: bounded by calendar, memory, and who happened to be online. For a portfolio, honesty without history is a slower way to guess. Keep the tab for the moments that deserve a human. For everything else, put repeatable routes, stored runs, and agreed thresholds in place; the multi-site setup guide linked earlier is a sensible next read when you are ready to standardise that spine.
