We deployed a fresh marketing site last week — this site, in fact. Static HTML, pre-rendered, served from Cloudflare's edge. No PHP, no JavaScript framework, no plugin ecosystem to drag the page down. Then we ran PageSpeed Insights on the homepage. Here's what came back.

Performance: 85
Accessibility: 93
Best Practices: 100
SEO: 100

Three greens, one yellow. The first instinct on seeing this is to nod approvingly at the greens and let the yellow slide. "Eighty-five is a passing grade. Most sites aren't even close."

For a site that markets itself as "sites that load in under a second", eighty-five is a credibility problem. We took the diagnostic apart to figure out where the time was actually going.

The score isn't the experience.

There's a comforting pattern in tech where we use a simple metric as a proxy for a complex system. Lighthouse scores are this pattern in action. Four panels, four numbers, one colour-coded verdict. Most teams check the headline ("we got 95!"), congratulate themselves, ship. The scores roll up into something visually green; the team reads green as good; the conversation moves on.

But the scores are a weighted heuristic. They reward things that are easy to detect from a static analysis of your HTML: structured data, accessible markup, modern image formats, semantic landmarks. Those are all good things. They are also all easy to do, especially on a freshly-built static site. The three non-Performance panels landed in the green because the bar is "did you remember to add alt text and an h1?". A WordPress site with the right SEO plugin can also score 100 there.

Performance, on the other hand, measures something that depends on the network, the device, and every choice made about what to load before the page paints. It's the only score on the panel that comes close to measuring user experience.

Eighty-five on Performance, with the other three in the green, is the diagnostic shape that says "we did the easy things and ignored the hard one." It's also the shape almost every "we got green!" celebration is actually showing: the green panels are the easy ones, the yellow panel is the one that matters, and most teams just look at the average.

The thing that actually mattered was hidden three clicks down.

Lighthouse exposes the "LCP breakdown" if you click into the diagnostic. LCP, Largest Contentful Paint, is the time between the user clicking the link and the largest piece of content in the viewport being painted. In our case, that element was the page's H1: "Sites that load in under a second. Anywhere."

The breakdown went:

Time to first byte: 0 ms. Element render delay: 3,170 ms.

TTFB at zero milliseconds means Cloudflare's edge did its job perfectly — the HTML was on the user's machine the moment they asked for it. The remaining 3.17 seconds was the browser holding the H1 in memory and refusing to paint it. That's not a server problem. That's a browser saying "I have your H1, I refuse to paint it until your fonts load."
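You don't have to take Lighthouse's word for that. The browser exposes the LCP entry to the page itself; here's a minimal sketch you can paste into the DevTools console, using nothing beyond the standard PerformanceObserver API:

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.element is the node the browser treated as LCP (here, the H1);
    // entry.startTime is when it finally painted, in milliseconds after navigation.
    console.log(entry.element, Math.round(entry.startTime));
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });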

Three render-blocking resources were conspiring to delay that paint:

Resource                                 Origin                       Cost
fonts.googleapis.com/css2?…              3rd-party                    750 ms
cloudflare-static/email-decode.min.js    1st-party (auto-injected)    470 ms
/css/global.css                          1st-party                    160 ms

Each of these has a different story.

Google Fonts is the most innocuous-looking line of code in your <head>.

Google Fonts is the easiest thing in modern web development. Paste a <link> tag, get every weight and language in the world, free, CDN-distributed. We pasted ours. We got our weights.

We also added a third-party origin to the page. Different domain (fonts.googleapis.com), different DNS lookup, different TLS handshake, different certificate validation. All before we could read a single byte of font CSS. Then that CSS referenced woff2 files at fonts.gstatic.com: yet another origin, yet another hop. And the stylesheet itself is render-blocking, so the browser paints nothing, the H1 included, until it arrives; font-display: swap lets the text paint in a fallback face once the CSS lands, but the CSS has to land first, and it lives two origins away from the page.
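For reference, the pattern we'd pasted was the standard one. The family name below is a placeholder standing in for our real css2 URL, which listed three families:

<link rel="stylesheet"
      href="https://fonts.googleapis.com/css2?family=ExampleSans:wght@100..900&display=swap">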

The fix isn't to use a different CDN. It's to put the fonts on the same edge as the HTML — self-host. Same origin, no extra DNS, no extra handshakes. We downloaded the variable-axis woff2 files for the three families, dropped them in /fonts/, wrote our own @font-face declarations, removed the Google Fonts link entirely. Net change to the deployed site: one third-party origin gone, ~750ms of render-blocking eliminated, the fonts live on the same Cloudflare edge as everything else.
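The replacement is a handful of declarations along these lines, one per family. The family and file names here are illustrative rather than our exact ones:

@font-face {
  font-family: "Example Sans";
  /* variable-axis woff2, served from the same origin and edge as the HTML */
  src: url("/fonts/example-sans-variable.woff2") format("woff2");
  font-weight: 100 900;  /* one file covers the whole weight range */
  font-style: normal;
  font-display: swap;    /* paint fallback text immediately, swap when the file arrives */
}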

For a site that sells "all your content on one edge, near zero ongoing cost, no third-party dependencies", having a third-party font CDN baked into every page was a small piece of cognitive dissonance we'd been ignoring. Lighthouse caught it.

Cloudflare added a script we didn't ask for.

The middle row in the culprit table is the most surprising one. We didn't add email-decode.min.js. Cloudflare did. It's a feature buried in the dashboard called "Email Address Obfuscation", enabled by default, that scans HTML for email addresses and mailto: links, rewrites them into an encoded placeholder, and injects a small JavaScript that decodes them back in the visitor's browser. The supposed purpose is to thwart email scrapers.
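Concretely, a plain mailto: link in the source comes back from the edge looking roughly like this. The exact attributes and encoding are Cloudflare's and may differ by configuration, so treat it as a sketch:

<!-- what we wrote -->
<a href="mailto:hello@web9.co.uk">hello@web9.co.uk</a>

<!-- roughly what the edge serves: an encoded placeholder plus the decoding script -->
<a href="/cdn-cgi/l/email-protection#…" class="__cf_email__" data-cfemail="…">[email protected]</a>
<script src="/cdn-cgi/scripts/…/cloudflare-static/email-decode.min.js"></script>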

For some businesses that's useful — there are operations where the contact email is a personal address and the owner doesn't want it in spam-list scrapes. For ours, it's counterproductive: hello@web9.co.uk is a public business contact and we want it scraped, indexed, and remembered. The obfuscation script costs 470ms of render-blocking time for negative value.

We turned it off. The fix is one toggle in the Cloudflare dashboard. The lesson is that "automatically applied" optimisations can't be assumed to be optimisations for your specific case. Audit what your CDN is doing to your HTML.

Our own CSS file was render-blocking — fixable in one move.

The third row, our own /css/global.css, was the smallest delay (160ms) and the most easily fixed. The CSS file was small enough — about 4KB — that loading it as an external file was strictly worse than inlining it into the page's <head>. Inlining means no separate network request, no render-block waiting on it.

The reason teams default to external CSS is caching: a returning visitor on a second page gets the cached file. But for a static-site marketing page, most visits are first hits — the user clicked an ad, a search result, a shared link. Optimising for a returning-visitor cache hit comes at the cost of a worse first-impression paint. For a marketing site, that's the wrong trade.

We moved global.css into an inlined <style> block in the shared <head> partial. Same content, no extra request, no render block. The file on disk now exists as a deprecation note pointing future developers at where the styles really live.
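In the head partial the change is as dull as it sounds. The rules inside the style block below are stand-ins for whatever global.css actually contains:

<head>
  …
  <!-- before: one more render-blocking request -->
  <!-- <link rel="stylesheet" href="/css/global.css"> -->

  <!-- after: the same ~4KB of CSS, shipped inside the HTML -->
  <style>
    :root { --accent: #2563eb; }
    body { margin: 0; font-family: "Example Sans", system-ui, sans-serif; }
    /* …the rest of what used to be global.css… */
  </style>
</head>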

The fix is structural; the score is just a thermometer.

Three changes, none of them clever. Self-host the fonts. Disable an auto-injected script we didn't ask for. Inline a small CSS file. Total work: about an hour, including the time to find variable-axis font files and write the @font-face declarations correctly.

The point isn't that Web9 is fast now (we'll see in the next PSI run; this article is being published alongside the deploy). The point is that the diagnostic process matters more than the score. A team looking at "85 Performance, everything else green" and concluding "we're broadly green, ship it" misses the thing that's actually slowing down their users. A team that drills into LCP, finds the render-blocking chain, and asks "why is this third-party origin in our critical path?" gets the actual fix.

Green doesn't always mean green. Sometimes it means "we did the easy things, and the hard thing is hidden three clicks down." The hard thing is usually a single innocuous-looking choice — a third-party CDN, an auto-enabled feature, an external file that didn't need to be external — and the fix is usually about un-doing it.

We'll re-run PSI on the deployed site. The expected new shape is Performance approaching the high 90s, LCP in the low single seconds, all four panels green for the right reasons rather than three out of four green for the easy reasons. We'll publish the before/after numbers when they land.

Either way, the lesson holds. Score-chasing optimises for the metric. Audit-driven optimisation optimises for the user. The user is the only one whose answer counts.
