
Conversion Testing Basics

Before you jump into the deep end of the testing pool, there are some important things to think about that will help you understand your results and what to do next.  

Develop a Hypothesis 

The first thing to do before launching any test is to come up with a hypothesis. Basically, what do you think will happen and how will you go about proving it?

For example, imagine testing out a button color. You may have the hypothesis that the blue button will drive more conversions than the green one. This is important because it forces you to have a reason to run a test instead of saying, “it would be interesting to know…” You’re going to reach a firm point of view once you have the data in hand.

With that hypothesis in mind, you can then have clear decisions that you know you’ll make ahead of time depending on the results you get. For example, if the blue button wins, you will change all of the buttons to blue in your pop-ups.

I know, that seems obvious but trust me, it’s really helpful.

When you’re thinking about any test, it’s also important to limit the number of variables to one. If you change a bunch of elements in your campaign all at once, you’ll never know exactly what drove the change in results. For example, if you’re running that same button color test, but you also change the copy of your pop-up, how will you know whether it was the button or the copy that drove the difference in results?

Does that mean you can’t test full sets of creative (image + color + copy) against each other? No. It just means you need to be conscious of what you’re learning and how you apply it to other related items. You can know that one full pop-up performed better than another, but you’ll want to avoid taking a single element of that pop-up, like a button color, and making the leap to site-wide changes.

Directional vs. Statistically Significant

Data purists will tell you that the only reliable tests are ones that are statistically significant. That means enough people have taken part in the test that the results are unlikely to be due to random chance, so they can be trusted to generalize to a broader audience. The important thing is that whenever possible, you want a sample large enough to reach the point of being trustworthy.

Unfortunately, most of us don't have the volume of web traffic that makes running those types of tests practical. That's totally fine—you can run directional tests instead, which can still be incredibly valuable, even if they're not 100% reliable.

Think about it this way: would you be better off asking 30 friends a question to see what the majority thinks, or would you rather just trust your gut? While the results of that poll might not be bulletproof, they certainly should help shape your opinion about what to do next.
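If you do have the traffic, checking whether a result crosses the significance threshold is just arithmetic. Here's a minimal sketch in Python of a standard two-proportion z-test; the function name and the example numbers are made up for illustration:

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    # Pooled conversion rate under the "no real difference" assumption
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Convert the z-score to a two-sided p-value
    return math.erfc(abs(z) / math.sqrt(2))

# Plenty of traffic: 10.0% vs. 11.5% conversion over 5,000 visitors each
print(two_proportion_p_value(500, 5000, 575, 5000))  # below 0.05: significant

# Thin traffic: a similar-looking lift over just 30 visitors each
print(two_proportion_p_value(3, 30, 5, 30))  # well above 0.05: directional only
```

By convention, a p-value under 0.05 is treated as statistically significant. Notice that the same apparent lift that is significant at 5,000 visitors per version is nowhere close at 30—that's exactly why low-traffic tests are directional rather than definitive.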

A/B Tests vs. Sequential Tests

If you’re new to the testing game, you might be wondering what an A/B test is. It’s actually very straightforward. In an A/B test, you create two versions of something, like a pop-up or landing page, ideally with only one variable changed, and randomly split your web traffic, sending a certain percentage of people to one version vs. the other. Then, you evaluate which version drives more conversions and pick a winner.

The great thing about an A/B test is that it automatically accounts for all other factors because the only difference between one set of visitors and another is what they are seeing on your site. The time of year is the same and your offer is the same. The weather is the same. You get the idea. You’re limiting the outside influences that impact the results of your test.
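The random split itself is simple to implement. One common approach—a sketch for illustration, not any particular tool's actual mechanism—is to hash a stable visitor ID, so the same person always sees the same version across page loads:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to variant A or B (50/50 split)."""
    # Hashing the ID keeps the assignment sticky: the same visitor
    # always lands in the same bucket, visit after visit.
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same visitor gets the same variant every time
assert assign_variant("visitor-42") == assign_variant("visitor-42")

# Over many visitors, the split comes out close to 50/50
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"visitor-{i}")] += 1
print(counts)
```

The sticky assignment matters: if a returning visitor could flip between versions, you'd be muddying exactly the comparison the test exists to make.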

Sequential testing, on the other hand, means running one version for a period of time, then making your changes and running the new version for an equal period. Then, you compare the results. This is easy to execute, but harder to analyze correctly, because any number of outside factors beyond your control could have affected the results.

So, which is better?

In a perfect world, we would all be running statistically significant A/B tests and we’d be learning and improving rapidly. The next best scenario is to run directionally valid A/B tests. The last choice (that's still way better than nothing) is to run sequential tests. You can still learn a lot if you combine your instincts with the results.

Need some inspiration? In the next chapter, we'll talk about six conversion tests worth trying to get you started.
