
There’s no disputing that A/B testing can be a fruitful way for companies to determine which design and language lead customers to act. But after a recent study we conducted on test complexity, we learned something surprising: Complex tests are no more or less likely to win than easy ones.

Complexity Doesn't Guarantee Results

Before we explain what we mean here, let’s back up.

As you know, we implement A/B tests for clients, and these tests vary wildly in complexity; some are simple copy changes, while others are drastic redesigns of site functionality that depend on external web services we custom-build to make the test work. What’s more, some A/B practices work better than others. We discussed some of the best “best practices” for companies to follow with regard to A/B testing in this post.

A few months ago, we got curious about how the complexity of an A/B test impacts the outcome, so we did a little digging.

First, we polled the engineers and product managers responsible for 154 different A/B tests across a representative sample of our clients, and we asked them to rate each test’s complexity on a scale of 1 to 3. Tests ranked 1 were the simplest and took a couple of hours to build; tests ranked 3 were the most complicated and often took more than six hours. We also had these experts rate each test for “winningness.” Here, the scoring key was a bit different:

1 – Not a win
2 – Not a win but informative
3 – Win

Next, we tabulated our results. On one hand, our product managers and engineers synced up nicely on rating test complexity: a correlation of +0.57 indicated that they largely agreed. On the other hand, test complexity did NOT dovetail with wins: the correlation between complexity and winningness came out to -0.15. Math geeks would call that number “weakly” negative, but it is close enough to zero that it might as well not exist. We concur.
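For readers who want to see how a correlation like this is calculated, here is a minimal sketch in Python. The ratings below are hypothetical placeholders, not our actual study data; the point is only to illustrate the Pearson correlation used to compare two sets of 1-to-3 ratings.

```python
# Minimal sketch of the correlation check described above.
# The rating lists below are hypothetical placeholders, NOT the actual study data.
from statistics import mean
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-test ratings on the 1-to-3 scales described above.
complexity = [1, 3, 2, 1, 3, 2, 1, 2]   # 1 = simplest, 3 = most complicated
winningness = [3, 1, 2, 2, 1, 3, 2, 1]  # 1 = not a win, 3 = win

print(f"complexity vs. winningness: r = {pearson(complexity, winningness):+.2f}")
```

A value near +1 would mean complex tests win more often, a value near -1 would mean they win less often, and a value near zero, like the -0.15 we observed, means complexity tells you essentially nothing about the outcome.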

This is what led us to conclude that complex tests are no more or less likely to win than easy ones: technical complexity has no strong influence on outcome. It is also what has led us to search more aggressively for ways to simplify test design while still answering the same underlying questions.

In a nutshell, our findings have prompted us to redouble efforts to diversify our tests overall.

Don’t get us wrong: we still embrace complex tests, and our ability to implement them is part of what sets Cro Metrics apart from other agencies and from clients’ internal teams. We also now draw a clear distinction between “drastic” tests (which can be simple to implement and are often the best kind of test for clients with low traffic and/or conversion rates) and complex ones. Our recent research showed that both approaches can yield wins. With this in mind, diversify your testing: run some big tests and small ones, hard ones and easy ones. Whatever you do, keep up the pace of testing in order to improve results.