Why I don’t believe in A/B tests (and what to do instead)

Every marketing team wants to be “data-driven,” and running A/B tests feels like a big step in that direction. In this post, I’ll share why A/B tests for B2B startups selling to enterprise clients are pretty much the opposite of being data-driven, aren’t useful, and can actually cause harm. I’ll also suggest where to better direct your efforts.

Before we dive in, note that my argument mainly applies to early-stage B2B startups selling to enterprise clients. If you’re a B2C company, or a B2B company selling to SMBs, A/B tests can be useful. When I worked at a large e-commerce retailer whose name rhymes with glamazon, A/B tests were done religiously, to great effect. So I’m not categorically opposed to them; I just don’t think they’re effective for early-stage enterprise B2B startups.


Why A/B tests don’t work for B2B enterprise startups

  • You don’t have enough data. For your test results to be statistically significant, you need enough data to run the test on (e.g. email recipients, website traffic). For small startups with an email database of a few thousand leads and website traffic of a few thousand sessions per month, it’ll be hard to reach statistical significance on most A/B tests. That’s because when the difference between the two groups’ results is small, you need hundreds of thousands of events to hit statistical significance (see the back-of-the-envelope calculation below). With only a few thousand sends or sessions, random variation in a handful of clicks or conversions makes the exercise meaningless.
    For email marketing tests, usually the only thing worth testing is the subject line, where every recipient counts toward the sample, so the number of openers has a chance of being large enough for the results to be significant. It’s much harder to reach significance with email copy tests because of the email funnel: with average CTRs of 1-5%, any tiny difference in the absolute number of clickers (even 2-3 people) will wildly sway the resulting test CTRs, but not in a statistically significant way. It’s the same with testing a website’s conversion rate; you’ll wait far too long for traffic to accumulate enough to reach statistical significance (we’re talking months or even longer).
  • You’re not setting up the tests correctly. No offense, but you’re likely doing it wrong. I know because I’ve been there and done that, even when I was working for extremely quantitative, data-driven companies. Common mistakes I’ve seen (and made):
    • Not testing one thing at a time. An A/B test must change only one element at a time; otherwise you won’t know which element drove the results. So if you’re testing email subject lines, the email copy must be identical. If you’re testing HubSpot vs. Outreach as email platforms, you have to send identical emails to randomly selected audiences. Which brings me to…
    • Not randomizing test groups. If your audiences are different, they might react differently, and you won’t know whether that’s because of the variable you’re testing or because it’s a different audience. You must create two purely random groups as your A/B test audiences (there’s a minimal split sketch below).
    • Not documenting the results. Even if you correctly set up your test and got beautifully conclusive results, all too often people keep the results in their email or web platform, where they languish, get forgotten, and don’t get applied going forward. Maintaining a test result document is a step in the right direction.
      Don’t get me wrong: You’re not making these mistakes because you’re dumb or ill-intentioned. It’s because doing statistics properly is notoriously time-consuming, and startups typically invest their scarce resources elsewhere (justifiably so!).
  • Even if you did everything right, how useful are the results? Suppose you A/B tested two subject lines:
    Check out our latest product release 💥
    vs.
    Check out our latest product release

The test showed that the open rate for the first subject line, the one with the emoji, was 10% higher. Does this mean that all emojis perform better? That you should include an emoji in every single subject line from now on? That six months from now, this will still be the case?

Suppose another A/B test showed that including the recipient’s first name in the subject line (a popular A/B test) leads to higher open rates. Does this mean you should include the recipient’s first name in every single email you send from now on? That will probably get old pretty fast.
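To put the “not enough data” point into numbers, here’s a back-of-the-envelope sketch using the standard normal-approximation formula for comparing two proportions, at a 5% significance level and 80% power. The baseline rates and lifts below are illustrative assumptions, not data from any real campaign.

```python
# Rough sample size needed to call an A/B test on two conversion rates,
# using the normal-approximation formula for comparing two proportions
# (two-sided alpha = 0.05, power = 0.80). All rates are assumptions.

Z_ALPHA = 1.96  # two-sided alpha = 0.05
Z_BETA = 0.84   # power = 0.80

def recipients_per_variant(p1: float, p2: float) -> int:
    """Roughly how many recipients EACH variant needs to tell p1 from p2."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return round((Z_ALPHA + Z_BETA) ** 2 * variance / (p1 - p2) ** 2)

# Subject line test: 25% open rate vs. a 10% relative lift to 27.5%
print(recipients_per_variant(0.25, 0.275))   # ~4,850 per variant, ~9,700 sends total

# Email copy test: 3% CTR vs. the same 10% relative lift to 3.3%
print(recipients_per_variant(0.03, 0.033))   # ~53,000 per variant, ~106,000 sends total
```

In other words, a subject line test is just barely feasible on a list of about ten thousand, while the same relative lift in click rate needs six-figure send volume, which is why most copy and conversion tests on a small list never reach significance.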
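As for the “not randomizing” mistake, the fix is mechanically simple. Here’s a minimal sketch (the contact list is a made-up placeholder): shuffle the full audience once, cut it in half, and make sure the only difference between the halves is the variant they receive.

```python
import random

# Minimal A/B split: one shuffle of the full audience, then an even cut,
# so group membership is purely random. The contacts are placeholders.
contacts = [f"lead{i}@example.com" for i in range(5000)]

random.seed(42)           # fixed seed so the split can be reproduced
random.shuffle(contacts)

midpoint = len(contacts) // 2
group_a = contacts[:midpoint]   # receives variant A
group_b = contacts[midpoint:]   # receives variant B; everything else stays identical
```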

You’re using an A/B test to avoid difficult decisions.

You went through a positioning workshop and came out with two messaging options but no clear winner. “Let’s A/B test!” someone is bound to suggest. Unfortunately, it’s not going to work, for all the reasons above. Some decisions you just have to make, based on intuition (yes!), conversations with your stakeholders (clients, prospects, team members), market research, and so on.

What can you do instead of A/B testing?
My recommendation is to invest the hours you would have spent on the A/B test in improving the campaign itself. Ask yourself: Is the content valuable? Is my argument cogent? Is the topic original?

Another idea more worthy of your time? Talk to your customers. Instead of A/B testing on 5,000 emails, talk to one or two of your best customers. “Hey El-ad, we’re thinking of launching this campaign, here are two ideas, which one is more interesting?”

People love to be heard and to feel that their ideas matter. This one call will probably be more effective than all the dubious A/B statistics you’ll gather. (Hat tip to El-ad David Amir, data scientist and founder of Astrolabe. This idea means even more coming from a data scientist!)

Invest in the underlying factors that can 10X your results, instead of tinkering with a subject line to add 2% to your open rate. Your audience will think more highly of you if you present a more thoughtful idea, as opposed to a call-to-action that’s been tweaked to say “learn more” instead of “read more.”

To illustrate the point: If you’re a vegetarian, no fancy subject line is going to make you open this email:

Cooking your steak to perfection
Cooking your 🥩 to perfection
Charles, cook your steak to perfection

But this one might:
10 recipes both vegetarians and carnivores will love


The bottom line

When you’re just starting out and have limited data and resources, spend more time on substance and less time on refinement. And don’t be afraid to proudly proclaim that you’re not doing A/B tests. As with many other things in marketing, a lot of people probably agree with you but are afraid to say it.

