
A/B Testing Frameworks for Landing Pages: Smarter Experiments, Better Conversions

By Brad Holmes
10 min read

A/B testing sounds like a simple win: change one thing, see what works better, scale the winner. But most teams don’t get results — not because A/B testing is flawed, but because the way they’re running tests is.

Here’s where it goes wrong:

  • No framework: Changes are made based on gut feeling, not structured thinking.
  • Too many variables: People tweak five things at once, and even if conversion improves, they have no idea why.
  • Chasing clever ideas: Headlines get rewritten with flair, but clarity tanks. Or a “minimalist” design hides the CTA.
  • Calling it too early: A test runs for a week, gets 50 conversions, and someone shouts “Winner!” without statistical significance.

The truth is, A/B testing isn’t a growth hack — it’s a discipline. And like any discipline, you need structure, focus, and a bit of patience.

If your tests aren’t moving the needle, it’s probably not because A/B testing doesn’t work. It’s because the setup is broken.

What an A/B Testing Framework Actually Does

An A/B testing framework gives your experiments a backbone. Without one, it’s just guesswork dressed up in data.

Here’s what a solid framework actually does for you:

  • Keeps your team focused – You know exactly why you’re testing something and what you expect to happen.
  • Avoids wasted effort – No more random tweaks or endless “what if we tried this?” debates.
  • Improves learning – Even when a test fails, the insight is clear: you know what didn’t work and why.
  • Standardises process – You don’t reinvent the wheel with every test. The process becomes repeatable and scalable.
  • Aligns stakeholders – Whether you’re answering to a client, CMO, or your own team, a framework lets you explain what you’re testing and what success looks like.

Think of it like engineering: you wouldn’t build a bridge without a blueprint. Testing is the same — if it’s going to support real business decisions, it needs structure.

A/B testing isn’t just about finding wins. It’s about building a system that tells you how to win more often.

Core Elements of a Solid A/B Testing Framework

Before you start designing variants, you need to get the structure right. These are the non-negotiables — the things every effective testing framework should include:

1. Hypothesis-First Thinking

Start with a clear statement:

“We believe [this change] will result in [this outcome] because [this insight].”

No guesswork. No vague hope. Just a structured reason for why you’re doing it.
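
If it helps to make that concrete, here's one way to capture a hypothesis as a structured record before anything goes live. A minimal sketch in Python; the field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One record per test. Fill every field before the test goes live."""
    change: str    # what you're altering on the page
    outcome: str   # the measurable result you expect
    insight: str   # the research or data the belief rests on

h = Hypothesis(
    change="replacing the generic hero headline with the visitor's main pain point",
    outcome="a 10% lift in demo signups",
    insight="session recordings show visitors scrolling past the hero without engaging",
)
print(f"We believe {h.change} will result in {h.outcome} because {h.insight}.")
```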

2. Isolated Variables

Change one thing at a time. That’s it. Test a headline, or a button, or an image — not all three.
If you test multiple variables at once, you’ll never know what caused the result.

3. Pre-Test Sizing

Don’t start a test if you don’t have enough traffic or conversions to reach statistical significance. Otherwise, it’s just noise dressed as data.

Use a sample-size calculator upfront. Set your minimum detectable effect and confidence level, then check the sample size you'd need. If you can't realistically hit it, reconsider the test.
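
A few lines of Python will do the same job as an online calculator. Here's a minimal sketch of the standard two-proportion sample-size formula (the example numbers are illustrative; dedicated calculators will give similar answers):

```python
from statistics import NormalDist

def sample_size_per_variant(p_base: float, p_variant: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant for a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    p_pooled = (p_base + p_variant) / 2
    effect = abs(p_variant - p_base)  # minimum detectable effect (absolute)
    term = (z_alpha * (2 * p_pooled * (1 - p_pooled)) ** 0.5
            + z_power * (p_base * (1 - p_base)
                         + p_variant * (1 - p_variant)) ** 0.5)
    return int(term ** 2 / effect ** 2) + 1

# e.g. baseline 3% conversion, hoping to detect a lift to 4%
print(sample_size_per_variant(0.03, 0.04))  # roughly 5,300 visitors per variant
```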

4. Clear Primary Metric

Decide what you’re optimising for — and be strict.
Is it signups? Add-to-cart? Lead form completions?
Don’t move the goalposts once the test is running.

Use secondary metrics for insight, not decision-making.

5. Timeboxing

Set your minimum test duration — usually at least 1–2 full business cycles.
Don’t declare a winner after 3 days just because the variant looked promising.
If you peek early, you risk false positives.
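
When the timebox does end, the maths for checking the result is simple. A minimal sketch of a two-sided two-proportion z-test using only the standard library (the traffic numbers here are made up):

```python
from statistics import NormalDist

def p_value(conv_a: int, visits_a: int, conv_b: int, visits_b: int) -> float:
    """Two-sided p-value for a two-proportion z-test."""
    p_a, p_b = conv_a / visits_a, conv_b / visits_b
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)
    se = (p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Run this once, at the end of the planned duration, not daily.
print(p_value(conv_a=120, visits_a=5000,
              conv_b=150, visits_b=5000))  # ~0.06: promising, but not significant at 0.05
```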

You don’t need fancy tools to get this right. A Notion doc and a bit of discipline go a long way. Frameworks exist to protect you from bias, bad decisions, and wasted effort.

Proven Frameworks You Can Steal

There’s no shortage of testing advice online. But most of it skips the actual structure. These frameworks aren’t theory — they’re battle-tested models used by teams who treat testing as a core part of growth.

a. ConversionXL’s Testing Ladder

This model forces you to fix the foundations before chasing creative wins. It’s a prioritisation ladder:

  1. Technical issues – Are things broken? Fix them first.
  2. UX problems – Is the journey confusing or slow?
  3. Copy and messaging – Does the page actually say the right thing?
  4. Design and layout – Only now do you test visuals and hierarchy.
  5. Offers and pricing – High-impact, but only when the rest is solid.

If you jump straight to design tweaks without fixing clarity or load times, you’re skipping steps. This ladder keeps your focus where it matters.

b. PIE Framework

Potential. Importance. Ease.

Score every test idea based on:

  • Potential – How much upside is there if this works?
  • Importance – How critical is the page or step to the funnel?
  • Ease – How easy is this to build and test? High scores mean low effort.

It helps kill the “wouldn’t it be cool if…” ideas before they burn resources. You want high potential, high importance, low effort tests.
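
The scoring is deliberately crude. Here's a sketch of how you might rank a backlog (the ideas and scores are invented for illustration):

```python
# Score each idea 1-10 on Potential, Importance, Ease; rank by the average.
ideas = [
    {"name": "Rewrite hero headline",  "potential": 8, "importance": 9, "ease": 7},
    {"name": "Animate the CTA button", "potential": 3, "importance": 4, "ease": 8},
    {"name": "Simplify checkout form", "potential": 9, "importance": 9, "ease": 4},
]

for idea in ideas:
    idea["pie"] = (idea["potential"] + idea["importance"] + idea["ease"]) / 3

for idea in sorted(ideas, key=lambda i: i["pie"], reverse=True):
    print(f'{idea["pie"]:.1f}  {idea["name"]}')
```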

c. The LIFT Model

This one’s about friction and clarity. Look at your control and your variant through six lenses:

  • Value Proposition
  • Clarity
  • Relevance
  • Distraction
  • Urgency
  • Anxiety (risk/hesitation)

Run your page through this before you even think of a variant. It’ll help you spot what’s missing — or what’s getting in the way.

d. Pillar–Cluster Testing

Good if you’re running lots of experiments or have multiple audiences.

  • Pillars = Big foundational changes (e.g. layout, headline strategy, offer type).
  • Clusters = Micro-optimisations (e.g. button colour, icon placement, form label wording).

Use pillar tests to validate direction. Use cluster tests to sharpen the details once the base is proven.

These frameworks don’t replace strategy — they give it structure. Use them to guide your ideas, prioritise effort, and explain your approach to stakeholders who don’t live in Google Optimize all day.

Testing in 2025: What’s Changed

The fundamentals of A/B testing haven’t changed. But the context has — dramatically. If you’re still running tests the same way you did in 2018, you’re probably missing the mark.

1. Personalisation Is Everywhere — and It Breaks Clean Tests

With tools like AI-based content delivery, personalisation layers, and behavioural targeting, not every user sees the same version of your page.
That makes clean, split-path testing harder.

Tip: Segment your audience where possible. Run tests on stable user types (e.g. new visitors only, or mobile traffic only) to keep noise down.
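
One way to keep assignment stable within a segment is deterministic bucketing: hash the user ID together with the experiment name, so a visitor always sees the same variant. A minimal sketch (the function and names are illustrative, not tied to any particular tool):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("control", "variant_b")) -> str:
    """Stable pseudo-random assignment: same user + experiment -> same variant.

    Including the experiment name in the hash keeps assignments
    independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest[:8], 16) % len(variants)]

# Gate on a stable segment first (e.g. new mobile visitors), then assign:
print(assign_variant("user-123", "hero-headline-test"))
```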

2. Multi-Device Journeys Are the Norm

A user might land on mobile, click a CTA, but convert on desktop three days later.

That skews your conversion attribution if you’re only looking at last-click or session-based results.

Tip: Use tools that allow cross-device tracking or focus on deeper funnel events that show real buying intent, not just surface clicks.

3. Privacy Changes Are Shrinking Your Data

With third-party cookies fading out and more users blocking tracking scripts, your sample sizes and attribution windows are shrinking.

Tip: Test larger signals. Don’t over-optimise for micro-conversions that might not even be trackable next month. Prioritise actions that map directly to revenue or qualified leads.

4. Velocity Beats Volume

In fast-moving markets, running smaller, faster tests (with directional insight) often beats waiting for perfect significance — as long as you know the trade-off.

Tip: Treat some tests as exploration, not final proof. Use them to inform big bets, not declare hard winners.

Testing is about balancing rigour with reality. Clean data is harder to get, and buyer journeys are messier than ever. That doesn’t mean stop testing — it means test smarter, with sharper expectations.

Building a Culture of Testing (Not Just Running Tests)

You can have the best tools and frameworks in the world, but if the culture’s wrong, your testing programme will stall.
Too many teams run tests like one-off campaigns instead of treating experimentation as part of how the business makes decisions.

Here’s what separates teams that test from teams that learn:

1. Obsess Over the “Why,” Not Just the “Win”

A test that loses but teaches you something is more valuable than a win you can’t explain.

Shift the language from “Did Variant B win?” to:

“What did we learn, and what should we try next?”

2. Share Results — The Good and the Ugly

Wins are easy to parade. But documenting failures, near-misses, and inconclusive results builds trust.
It also stops people repeating bad ideas.

Tip: Keep a shared doc or internal wiki of every test — what was tested, why, the result, and what you’d do differently.

3. Kill Ego, Not Curiosity

Avoid “HIPPO testing” — letting the Highest Paid Person’s Opinion decide what gets launched.
Let the data speak, but don’t stifle smart ideas either. The best culture supports curiosity and structure.

4. Make Testing the Default

You shouldn’t need a big kickoff to run an experiment. If a landing page is being updated, the first question should be:

“What’s our test plan?”

Normalise testing as part of delivery, not an extra step.

5. Train People to Interpret, Not Just Run

Running a test is easy. Reading it properly — knowing what’s real, what’s noise, and what to do next — that takes practice.

Build that skill across your team, not just in the marketing department.

The real power of testing comes when it shifts how decisions are made. Not just on your landing pages — across your campaigns, your sales copy, even your product. That starts with culture.

When Not to Test

Not every situation calls for an A/B test. In fact, forcing a test in the wrong context can waste time, mislead your team, and cost conversions.

Here’s when to hold back:

1. You Don’t Have Enough Traffic or Conversions

If your page only gets 200 visits a month and converts at 2%, you’re looking at 4 conversions a month. No statistical significance is coming — ever.

Better move: Run qualitative research instead. Watch session recordings. Collect surveys. Do user interviews. You’ll get more value, faster.
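
For a sense of scale, here's a quick back-of-the-envelope using the kind of per-variant figure the sample-size formula sketched earlier produces (the numbers are illustrative):

```python
# Time to significance at 200 visits/month, split 50/50.
# ~13,800 visitors per variant is roughly what a two-proportion test needs
# to detect a lift from 2% to 2.5% at 95% confidence / 80% power.
monthly_visits = 200
visits_per_variant_per_month = monthly_visits / 2   # 100 per variant
required_per_variant = 13_800
months = required_per_variant / visits_per_variant_per_month
print(f"~{months:.0f} months, i.e. about {months / 12:.1f} years")  # ~138 months
```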

2. You’re Testing Too Big a Change

If you redesign the entire layout, change the copy, swap the offer, and launch it as “Variant B,” what exactly are you testing?

Better move: Split your rollout, measure impact on downstream metrics, or treat it as a new baseline and test micro changes from there.

3. You’re Mid-Campaign

Don’t A/B test during a major paid campaign push unless you’ve planned for it. Different traffic sources and intent can pollute results.

Better move: Wait until the campaign ends or isolate your paid traffic in a separate test environment.

4. You’re Just Doing It to “Be Data-Driven”

Not every decision needs a test. Sometimes the right move is obvious — like fixing a broken form or clarifying a vague headline.

Better move: Act. Save the testing muscle for things that truly need validation.

5. You Haven’t Done Any Research

If you don’t know what the user is struggling with, what message resonates, or what their journey looks like — testing is a shot in the dark.

Better move: Do your homework. Interviews, surveys, analytics deep-dives. Then test with purpose.

Wrap-Up

A/B testing landing pages isn’t about being clever — it’s about being disciplined.
Build the framework. Prioritise ruthlessly. Interpret honestly. And above all: treat it as a system for learning, not just “winning.”

If you’re serious about improving conversions, the structure you use to test is just as important as the variant you launch.

Brad Holmes

Web developer, designer and digital strategist.

Brad Holmes is a full-stack developer and designer based in the UK with over 20 years’ experience building websites and web apps. He’s worked with agencies, product teams, and clients directly to deliver everything from brand sites to complex systems—always with a focus on UX that makes sense, architecture that scales, and content strategies that actually convert.
