Core Web Vitals: What Still Matters and What Doesn’t

By Brad Holmes
11 min read

Core Web Vitals were introduced as a way to quantify how real users experience your site. What started as a Google-led initiative has become something broader: a shared language for developers, designers, and marketers to talk about performance in practical, user-focused terms.

But not all metrics stay relevant forever. Some fade out as better ones emerge. Others evolve in how they’re measured—and how much they actually affect your rankings or user experience.

This guide is for anyone trying to build sites that feel fast, stable, and smooth to use—regardless of trends or tool updates.

If you’re tired of chasing numbers and want to build genuinely performant, user-first websites, this is where to start.

The Core Web Vitals Metrics That Still Matter

Core Web Vitals are meant to reflect what users actually experience when they land on your site. I don’t treat a site’s Core Web Vitals as arbitrary scores—they’re signals that show whether a site feels fast, responsive, and stable. That’s what I care about, and it’s what I build for.

Here’s where I focus:

1. Largest Contentful Paint (LCP)

What it measures: The time it takes for the largest visible element—usually a hero image or a block of text—to appear in the viewport.

Why it matters: It’s the point where a user typically decides whether the site feels fast. Miss this and the whole experience drags.

Target: Under 2.5 seconds.

What I do about it (sketched in markup below):

  • Optimise image formats, compression, and delivery.
  • Strip out unnecessary render-blocking CSS and JS.
  • Lazy load anything not immediately visible.
  • Serve assets via a solid CDN and use proper caching.
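
For example, here's the rough shape of the markup I'm aiming for on a hero image. The file paths are placeholders; the attributes are the point:

```html
<!-- Hero image: part of the LCP, so load it eagerly and at high priority.
     The /img/hero.* paths are placeholders. -->
<picture>
  <source srcset="/img/hero.avif" type="image/avif">
  <source srcset="/img/hero.webp" type="image/webp">
  <img src="/img/hero.jpg" alt="Hero"
       width="1200" height="600"
       fetchpriority="high" decoding="async">
</picture>

<!-- Below the fold: let the browser defer the request entirely. -->
<img src="/img/gallery-1.jpg" alt="Gallery"
     width="800" height="450"
     loading="lazy" decoding="async">
```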

2. Interaction to Next Paint (INP)

What it measures: The delay between a user’s interaction and the next visual response. It tracks responsiveness across the whole session, not just the first click.

Why it matters: This replaced FID for a reason—it gives a clearer picture of how interactive the site feels.

Target: Under 200 milliseconds, consistently.

What I do about it (sketched below):

  • Break up long tasks and minimise main thread blocking.
  • Defer non-essential scripts.
  • Use web workers where possible.
  • Keep the interface light and reactive.
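
To make "break up long tasks" concrete, here's a minimal sketch of the yielding pattern I mean. scheduler.yield() is still rolling out across browsers, so it's feature-detected with a setTimeout fallback; processChunk and items are hypothetical stand-ins for real work:

```js
// Yield control back to the main thread so pending input can be handled.
// scheduler.yield() is newer; fall back to a macrotask where unsupported.
function yieldToMain() {
  if ('scheduler' in window && 'yield' in scheduler) {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Hypothetical: process a large list without blocking interactions.
async function processAll(items) {
  for (const item of items) {
    processChunk(item);   // placeholder for the real per-item work
    await yieldToMain();  // let the browser paint and respond in between
  }
}
```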

3. Cumulative Layout Shift (CLS)

What it measures: How much content moves around during load or interaction.

Why it matters: Layout shift is one of the fastest ways to make a site feel broken—even if it technically works.

Target: A CLS score below 0.1.

What I do about it (sketched below):

  • Always define dimensions for images, videos, and embeds.
  • Avoid inserting elements above existing content.
  • Use transform-based animations instead of layout-shifting ones.
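
As a sketch of the first point: explicit dimensions let the browser reserve the right amount of space before the file ever arrives.

```html
<!-- width/height give the browser the aspect ratio up front,
     so nothing jumps when the image finishes loading. -->
<img src="/img/team.jpg" alt="The team" width="800" height="533">

<style>
  /* Keep images fluid without losing the reserved ratio. */
  img { max-width: 100%; height: auto; }

  /* Reserve space for an embed that can't carry its own dimensions. */
  .video-embed { aspect-ratio: 16 / 9; }
</style>
```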

These are still the three metrics that matter most. I don’t optimise for them just to hit a score—I do it because they align with how people actually use the web. If I can make a site load fast, respond instantly, and stay visually stable, the rest tends to follow.

Screenshot of the Google Search Console Core Web Vitals report for brad-holmes.co.uk.

What’s Changed and What Doesn’t Matter Anymore

The way we measure performance has shifted. Some older metrics have been retired, and some of the advice floating around is just outdated. I’ve stopped chasing numbers that don’t reflect real user experience—and I’ve adjusted where I put my energy.

Here’s what’s changed:

First Input Delay (FID) is Gone

Google replaced FID with INP for a reason. FID only measured the delay on the first interaction, which didn’t give the full picture—especially on longer sessions or interactive apps.

INP looks at all interactions and gives a more honest view of how responsive your site feels. It’s a better metric, and it’s what I focus on now.
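
If you want to watch INP on your own pages, Google's web-vitals library is the simplest route I know. A minimal sketch, assuming the package is installed:

```js
import { onINP } from 'web-vitals';

// Reports the page's INP value as it updates over the session.
onINP((metric) => {
  console.log('INP:', metric.value, 'ms', metric.rating);
  // In production, send this to an analytics endpoint instead of logging.
});
```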

If you’re still trying to fix FID in your audits, you’re working on a metric that no longer counts.

Lighthouse Scores ≠ Real Performance

I still use Lighthouse during dev, but I don’t treat its score as gospel. It’s a lab test—it doesn’t reflect what real users with real devices and connections actually experience.

What matters more is:

  • Field data from the Chrome User Experience Report (CrUX).
  • Search Console’s Core Web Vitals report.
  • How your site behaves under actual traffic and interaction—not just a synthetic test.

PageSpeed Insights Got Smarter

Google’s updated PageSpeed Insights is now more aligned with how real performance is measured. There’s a new “Insights” experience that focuses less on theoretical audits and more on actionable guidance.

It’s worth using—but I still combine it with real-world monitoring tools when making decisions.

Site Speed Tricks That Don’t Work Anymore

Some things that used to be quick wins now either do nothing—or make things worse:

  • CLS fixes that hide layout problems: Stuffing content into containers or using invisible placeholders without actual layout planning? Doesn’t fool the metric anymore.
  • Over-optimising fonts with flash of invisible text (FOIT): That just creates a worse UX. I preload key fonts properly and keep fallback styles readable.
  • Aggressive lazy loading: Delaying everything might help your Lighthouse score, but it kills perceived speed. I lazy load the right things—not everything.

The goal isn’t to game the system—it’s to build a site that feels smooth and responsive in real use. That’s what the newer tools and metrics are pushing us toward, and it’s the direction I build in.

How I Actually Improve Core Web Vitals

I don’t chase scores—I fix what makes a site feel slow, janky, or unresponsive. Most of the performance gains I get come from a few focused actions. Here’s how I approach it in practice:

For LCP (Largest Contentful Paint)

I make sure the most important content loads first—and loads fast.

  • I serve images in modern formats (WebP or AVIF), sized properly, and compressed without killing quality.
  • I lazy load everything below the fold—but never above it.
  • I preload key assets like hero images or fonts if they’re part of the LCP (sketched below).
  • I strip out unnecessary scripts and keep the critical path clean.
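
Preloading, concretely, is a couple of lines in the document head; the asset paths here are placeholders:

```html
<head>
  <!-- Tell the browser early that these are critical to first paint. -->
  <link rel="preload" as="image" href="/img/hero.avif" type="image/avif">
  <link rel="preload" as="font" href="/fonts/body.woff2"
        type="font/woff2" crossorigin>
</head>
```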

If the first thing your user sees is an empty layout or a spinner, you’ve already lost momentum.

For INP (Interaction to Next Paint)

I keep the main thread free and the interface lightweight.

  • I break long tasks into smaller ones, especially with third-party scripts.
  • I avoid bloated frameworks when a simpler approach works.
  • I defer non-essential scripts and load interaction-heavy features only when needed (sketched below).
  • I watch what happens after a click—if a user taps something and nothing responds for half a second, that’s a problem.
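
"Only when needed" usually means a dynamic import behind the interaction itself. A sketch, where the comments module and its initComments export are hypothetical:

```js
// Keep the widget out of the main bundle; fetch it the first time
// someone actually asks for it.
document.querySelector('#show-comments')?.addEventListener(
  'click',
  async () => {
    const { initComments } = await import('./comments.js'); // hypothetical module
    initComments(document.querySelector('#comments'));
  },
  { once: true }
);
```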

Responsiveness isn’t about how fast the page loads—it’s about how quickly the site reacts to you once it’s up.

For CLS (Cumulative Layout Shift)

I design with structure in mind. Layout shifts are often the result of laziness or guesswork.

  • I always define sizes on images, videos, embeds, and iframes.
  • I don’t inject content from the top unless the user asks for it (like a menu or alert).
  • I avoid “shift-on-hover” UI that causes reflows.
  • I test animations carefully—transform and opacity are fine, but margin or height changes can wreck stability (compare the CSS below).
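
In CSS terms, that last point looks like this: transform and opacity are composited and shift nothing around them, while margin or height changes force layout.

```css
/* Fine: compositor-friendly properties, neighbours stay put. */
.card {
  transition: transform 150ms ease, opacity 150ms ease;
}
.card:hover {
  transform: translateY(-4px);
}

/* Avoid: this forces layout and nudges everything below the card. */
/* .card:hover { margin-top: -4px; } */
```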

Good layout doesn’t just look nice—it feels stable. That’s what CLS is really measuring.

I treat each of these not as checkboxes, but as reflections of how well a site respects its users’ time, attention, and device. Fix the underlying experience, and the metrics usually take care of themselves.

Common Pitfalls and Misunderstandings About Performance

A lot of performance issues don’t come from bad code—they come from misplaced priorities. I see the same mistakes crop up over and over. Here are a few I avoid:

Screenshot of Lighthouse mobile scores for the brad-holmes.co.uk website.

Chasing a “100” Lighthouse Score

I’ve had clients ask for a perfect score like it’s a badge of honour. Truth is, it’s not realistic—or even necessary.

Lighthouse scores vary by device, location, and even between runs on the same machine. A 90+ score that reflects a genuinely good experience is more than enough.

I care more about:

  • First impressions (LCP)
  • Usability (INP)
  • Stability (CLS)
  • And real-world metrics, not synthetic lab tests

Lighthouse is just a tool, not a target.

Overloading the Homepage

Trying to cram everything into the homepage usually leads to slow load times, layout shifts, and a confusing user flow.

I keep things intentional:

  • One hero image, not five
  • One primary CTA, not three
  • Only the scripts and styles that need to be there

Minimalism isn’t just a design choice—it’s a performance strategy.

Too Many Third-Party Scripts

Chat widgets, analytics, heatmaps, tag managers… they add up fast. Every script runs code, blocks the thread, and adds risk.

I only load what’s essential—and defer or lazy load the rest.
If something adds more weight than value, it’s gone.
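
One pattern that helps: inject heavy third-party scripts only once the browser is idle, so they never compete with the content people came for. A sketch; the widget URL is a placeholder:

```js
// Load a chat widget when the main thread is idle, not during startup.
function loadChatWidget() {
  const script = document.createElement('script');
  script.src = 'https://widget.example.com/chat.js'; // placeholder URL
  script.defer = true;
  document.head.appendChild(script);
}

if ('requestIdleCallback' in window) {
  requestIdleCallback(loadChatWidget, { timeout: 5000 });
} else {
  window.addEventListener('load', loadChatWidget);
}
```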

Not Testing on Real Devices

Performance on a MacBook Pro over fibre isn’t the same as performance on a mid-range Android over 4G.
I test using throttled conditions and actual phones whenever I can.

If it runs well there, it’ll run well anywhere.

Ignoring Performance After Launch

Performance can drift. A site might launch fast and clean, then slow down over time as plugins, tracking tools, or content bloat get added.

That’s why I build with monitoring in place—Search Console, CrUX data, and alerts if things drop off.
I treat performance like uptime. It’s not a one-time task.

Avoiding these traps is half the battle. Most performance wins don’t come from clever hacks—they come from discipline, clarity, and keeping things lean on purpose.

The Tools I Use to Monitor Core Web Vitals

Performance isn’t just something I fix during development and forget about. It’s something I track, audit, and maintain—because sites change, content grows, and plugins sneak in. Here’s what I rely on:

Google Search Console

I start here. The Core Web Vitals report in Search Console gives me field data straight from real users. If something’s slow, unstable, or unresponsive for them, this is where I’ll see it.

It tells me:

  • Which URLs are underperforming
  • Which specific metric is failing
  • Whether the problem is mobile, desktop, or both

It’s the closest thing to user reality I can get for free.

Pro tip: use the right format at the right size. Swapping oversized JPEGs for properly sized WebP files is one of the quickest page-speed wins.

PageSpeed Insights

Still useful—especially now that it reflects INP and aligns more closely with CrUX data. I use it to spot specific issues on a page-by-page basis.

It’s especially good for:

  • LCP element detection
  • Script blocking diagnostics
  • Recommendations that make sense in context (not just generic advice)

I don’t chase the score—but I use the insights.
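
It also has a public API, which is handy for scripted spot checks. A minimal sketch (run as an ES module); at low volume the endpoint works without an API key, though Google recommends one for regular use:

```js
// Query the PageSpeed Insights v5 API for a single URL.
const target = 'https://brad-holmes.co.uk/';
const api = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

const res = await fetch(`${api}?url=${encodeURIComponent(target)}&strategy=mobile`);
const data = await res.json();

// Field data from CrUX (when available) and the Lighthouse lab score.
console.log(data.loadingExperience?.metrics);
console.log(data.lighthouseResult?.categories?.performance?.score);
```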

Chrome DevTools

For real-time testing and debugging, I use DevTools. The Performance tab lets me dive into long tasks, scripting bottlenecks, and render times.

It’s where I catch:

  • JavaScript blocking input
  • Layout thrashing or reflows
  • Interaction lag tied to specific UI elements

It’s raw, but powerful.
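
The same signals the Performance tab surfaces are also exposed to scripts via PerformanceObserver; I sometimes paste something like this into the console while testing:

```js
// Log tasks that block the main thread for more than 50ms.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)}ms`, entry);
  }
}).observe({ type: 'longtask', buffered: true });

// Log layout shifts that weren't caused by recent user input.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {
      console.log('Layout shift:', entry.value, entry.sources);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```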

CrUX Vis

A newer tool, but worth knowing about. It gives me a visualised trendline of Core Web Vitals performance over time, straight from Chrome UX data. Great for spotting regressions or improvements over weeks and months.

The key is not just having tools—it’s knowing how to interpret them. I don’t fix warnings blindly. I use these tools to find real issues that affect real users, then work backwards from there.

Bringing It All Together

Core Web Vitals aren’t just about performance scores—they’re about how your site feels to real people. Fast load times, responsive interactions, and stable layouts aren’t optional anymore—they’re expected.

I don’t treat these metrics like a checklist. I treat them as signals that I’m building something that respects the user’s time, device, and attention. That means:

  • Prioritising meaningful content in the load order
  • Keeping interfaces lightweight and responsive
  • Designing with stability, not just style, in mind
  • Avoiding performance drift by monitoring after launch

I broke down exactly how I talk about all of this—without the jargon, and with a focus on what actually matters—in this piece on website performance metrics.

The tools will keep evolving. The scoring systems will change. But the fundamentals won’t: make things fast, clear, and solid to use.

If I’m doing that, the Core Web Vitals tend to take care of themselves.

Brad Holmes

Web developer, designer and digital strategist.

Brad Holmes is a full-stack developer and designer based in the UK with over 20 years’ experience building websites and web apps. He’s worked with agencies, product teams, and clients directly to deliver everything from brand sites to complex systems—always with a focus on UX that makes sense, architecture that scales, and content strategies that actually convert.
