It’s been some time since I have written about UX so here is a new post on testing. For those interested, here’s an introduction to A/B and Multivariate testing for website optimisation.
What is A/B testing?
A/B testing is a classic methodology for testing UX, design choices and user preferences in order to increase conversion rates. Your conversion goal could be anything from clicking a CTA (call to action) button, such as a newsletter sign-up or a form submission, to watching a video or sharing on social media.
Often, you can start with A/B testing on a landing page or an HTML email newsletter.
A/B testing is conducted on one variable at a time. It could be:
- Colour of the CTA button: the same colour as your links OR a striking red
- Placement of the CTA button: directly below the form OR also above
- Social share button: its size or its messaging
- Social share button: static or moving GIF
- Feature image of the video you want users to watch
As the name suggests, A/B testing splits your users in half for your chosen variable: 50% are shown Option A and 50% are shown Option B.
If you are testing an HTML newsletter, you can see the results in the report, which helps you determine which UX choice leads to a higher conversion rate.
If you are testing a landing page, let the test run for a period of time, after which you can check your analytics report to see which option works better.
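In practice, the 50/50 split needs to be consistent: a returning user should always see the same option. One common way to achieve this, sketched here with a hypothetical `assign_variant` helper (not from any particular testing tool), is to hash the user's ID together with an experiment name:

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically assign a user to Option A or Option B.

    Hashing the user ID together with the experiment name means a
    given user always sees the same variant for a given test, and
    users spread roughly 50/50 across the two options.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99
    return "A" if bucket < 50 else "B"
```

Because the assignment is derived from the user ID rather than stored, it needs no database lookup, and including the experiment name means the same user can land in different buckets across different tests.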
But sometimes A/B testing can be slow, especially when you are pressed for time and need to increase your conversion rate within the timeframe of your marcom campaign. This is where multivariate testing comes in.
What is Multivariate testing?
Multivariate testing is like A/B testing, except that you test on multiple variables instead of just one.
Taking the examples listed above, you could combine a few variables, such as testing the colour of the CTA button AND its placement. That gives you four scenarios:
- Green CTA button below the form
- Red CTA button below the form
- Green CTA button above and below the form
- Red CTA button above and below the form
Unlike A/B testing, multivariate testing requires you to divide your users according to the number of scenarios, which in this case is four: 25% for each scenario.
So depending on the number of scenarios, your users could be divided into sixths (16.7%), eighths (12.5%) or tenths (10%). Bear in mind that this affects how your website is perceived: some test scenarios may depart too far from your organisation’s branding. I would not divide beyond eighths (12.5%), to avoid radical differences in brand perception.
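The same hashing idea generalises to multivariate testing: enumerate every combination of the variables, then bucket each user into one combined scenario. The sketch below, with a hypothetical `assign_scenario` helper, assumes each variable is a list of options:

```python
import hashlib
from itertools import product

def assign_scenario(user_id: str, experiment: str, variables: dict) -> dict:
    """Assign a user to one combined scenario in a multivariate test.

    `variables` maps each variable name to its list of options, e.g.
    two variables with two options each yield four scenarios, so each
    scenario receives roughly 25% of users.
    """
    scenarios = [dict(zip(variables, combo)) for combo in product(*variables.values())]
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return scenarios[int(digest, 16) % len(scenarios)]

# The four CTA scenarios from the list above:
variables = {
    "cta_colour": ["green", "red"],
    "cta_placement": ["below_form", "above_and_below_form"],
}
```

Note how quickly the scenario count grows: adding a third variable with two options doubles the scenarios to eight, which is why multivariate tests demand so much more traffic.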
This brings us to the differences between A/B testing and multivariate testing.
A/B testing versus multivariate testing
So when do you use A/B testing and when do you use multivariate testing?
The rule of thumb is sample size. You need a large enough sample to make a sound UX decision, so that whatever change you make will reliably increase the conversion rate and help you reach your goal.
Therefore, for UX testing to be effective, choose the test according to your site traffic.
If you are running a low traffic site, run an A/B test.
If you are running a high traffic site, do a multivariate test.
There are many definitions of site traffic, but I usually go by the number of users (page loads, not visits) on a website at any given moment:
- Low traffic: fewer than 50
- Medium traffic: 50-5,000
- High traffic: more than 5,000
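The rule of thumb above could be expressed as a tiny helper. The thresholds come from the bands listed, but treating the medium band as "either, depending on how many scenarios you need" is my own reading, since the post only gives explicit rules for low and high traffic:

```python
def recommended_test(concurrent_users: int) -> str:
    """Suggest a test type from concurrent users (illustrative thresholds)."""
    if concurrent_users < 50:
        return "A/B test"          # low traffic
    if concurrent_users <= 5000:
        return "A/B test or a small multivariate test"  # medium traffic
    return "multivariate test"     # high traffic
```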
What are your experiences with A/B testing and multivariate testing? Let me know your thoughts in the comments section below!
Eureka Moments are not so much moments of sudden realisation or enlightenment, like Archimedes’. They are moments during my commute when I get to reflect on things someone mentioned to me, things I am confronted with, or things that I or others have sought a solution for. They are more ‘oh, I get it’ than ‘I have discovered it’.