Blog

What is A/B testing in UX? | UX research #26

A/B testing is an excellent research method for testing two alternative versions of a given solution at the same time. Read our article to learn how to conduct A/B tests and see their benefits and limitations.

A/B testing in UX – table of contents:

  1. What are A/B tests in the context of UX research?
  2. When to apply A/B testing?
  3. How to conduct A/B testing?
  4. Summary

What are A/B tests in the context of UX research?

A/B testing lets you compare two versions of a product or solution (version A and version B) and evaluate which one wins greater approval from users. Success can be measured by conversion rate, time spent on the site, or participants’ feedback and their willingness to recommend the site or product. Before the test, you need to define what “success” will mean for a particular version.
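As a minimal illustration of conversion rate as a success metric, the sketch below compares two variants; the visitor and conversion counts are made-up assumptions, not real data:

```python
# Sketch: comparing two variants by conversion rate.
# The visit/conversion counts are illustrative assumptions.

def conversion_rate(conversions: int, visitors: int) -> float:
    """Share of visitors who completed the target action."""
    return conversions / visitors if visitors else 0.0

rate_a = conversion_rate(conversions=48, visitors=1000)  # version A
rate_b = conversion_rate(conversions=63, visitors=1000)  # version B

print(f"A: {rate_a:.1%}  B: {rate_b:.1%}")  # A: 4.8%  B: 6.3%
```

A higher rate alone is not a verdict, though: whether the difference matters depends on the statistical thresholds defined before the test.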

When to apply A/B testing?

You can deploy A/B tests for prototype testing, during the product development phase, as well as for building marketing and promotional strategies. They are the perfect tool for arriving at decisions that can affect an organization’s bottom line. A/B tests come in handy especially when you already have a hypothesis based on previous research and want to confirm that it is the right solution. Research questions constructed for A/B testing might look like these:

  • Which version of the product generates a higher conversion rate?
  • Which of the two differently worded push notifications increases engagement in the app?

A sound A/B test should keep comparisons as simple as possible: instead of comparing two completely different versions of the site, it is better to test two header styles or two locations of the CTA button. With small, focused comparisons, you can recognize precisely which font, color, element, or location influences the UX most.

This research method comes in two kinds: univariate and multivariate tests. The first focuses on the difference between two variants of a single item – for example, a red button versus a blue button. A multivariate test compares more than two variants at the same time – e.g. red, blue, green, and white buttons (which can additionally differ in their headings, e.g. “Check this” and “See more”).
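To make this concrete, here is a minimal sketch of how users could be split deterministically between variants, so the same user always sees the same version; the variant names and the hashing scheme are illustrative assumptions, not a prescribed implementation:

```python
# Sketch: assigning users to test variants deterministically.
# Hashing the user id means the same user always lands in the same group.
import hashlib

def assign_variant(user_id: str, variants: list[str]) -> str:
    """Hash the user id and map it to one of the given variants."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Univariate test: two variants of one element.
assign_variant("user-123", ["red_button", "blue_button"])

# Multivariate test: more than two variants at the same time.
assign_variant("user-123", ["red", "blue", "green", "white"])
```

Deterministic assignment matters in practice: if a returning user were re-randomized on every visit, their metrics would mix both versions and muddy the comparison.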

The key upsides of A/B testing are speed and low cost. It also lets you evaluate several product variants on a large group of real users. Still, take care to focus on the aspects that can have a real impact on the overall perception of the product – don’t compare random elements. Form a hypothesis, carry out complementary research, then consult your design and development team. Together, you’ll decide which essential features to examine by conducting univariate or multivariate A/B tests.

A/B testing is usually a quick form of research, though not always. You may need to run a test for a few weeks to gather enough data for UX analysis, but it can just as well be done in a few days or even a few hours. How long a test takes depends on many factors.

How to conduct A/B testing?

  1. Identify your problem. Apply the right analytical tools to precisely establish its nature.
  2. Find out as much as you can about the problem and the users affected by it. Pinpoint precisely where the flaw occurs and try to figure out why it happens; understanding it in detail will make for a properly rigorous analysis.
  3. Formulate a hypothesis that answers how to solve the problem. A hypothesis is a testable assumption. You can phrase it as a condition – “if X happens, then Z” – for example, “if the headline is set in font size 22 instead of 18, the conversion rate will increase”. A/B testing will tell you whether the conjecture in the hypothesis is correct.
  4. Define your goal. Determine what you want to achieve with the study and with the entire research and design process – for example, you want more users to click the CTA button on the homepage.
  5. Define statistical accuracy. Determine the figures you need both for the practical evaluation of the test and for the business stakeholders – e.g., will a 2% increase in conversions satisfy them and be worth the investment in the study?
  6. Define the required scale of results. What number of respondents will ensure statistical accuracy? What percentage of the daily, weekly, or monthly user base will make the results valuable and conclusive? Determine this before proceeding with the test.
  7. Create version B and test your hypothesis. Prepare an alternative variant (variant B) of the site/product/functionality based on your hypothesis and start testing. At this stage, developers step in to implement the second, alternative solution, and users – unknowingly split into two groups (group A and group B) – use the site/app as before. During the test, avoid looking at the data until you have collected enough of it for statistical validity and a viable result.
  8. Analyze and act on the test results. If version B meets the established effectiveness threshold and the results confirm your hypothesis, you can roll it out to all users (no longer split between versions A and B). If the hypothesis is disproven, stay with the original version A or devise and test a new hypothesis. Also consider alternative research methods to supplement the data.
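The statistical-accuracy check described above can be sketched with a two-proportion z-test, one standard way to decide whether version B’s conversion rate genuinely beats version A’s. The visitor and conversion counts below are made-up assumptions, and the 5% significance level is just a common default:

```python
# Sketch: one-sided two-proportion z-test for an A/B conversion comparison.
# Standard library only; counts and threshold are illustrative assumptions.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and one-sided p-value for rate_b > rate_a."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)        # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))      # one-sided upper tail
    return z, p_value

z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
if p < 0.05:
    print(f"Significant at 5% (z={z:.2f}, p={p:.3f}): roll out version B")
else:
    print(f"Not significant (z={z:.2f}, p={p:.3f}): keep version A or retest")
```

This also shows why sample size matters: the same 0.6-point difference in conversion rate that is significant at 10,000 visitors per group would not be at a few hundred.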

Summary

A/B testing is a fairly technical subject. It requires some knowledge of statistics, as well as more specialized technical and programming know-how (or a good relationship with the company’s development team). It’s a direct method, and on top of that quite simple, fast, and cheap: it lets you compare two alternative versions of a product at little cost with satisfactory results. What’s more, because its findings are based on the behavior of real users, they are as precise as you can get. Still, remember that you can’t test every feature, element, or tiny detail of the site – that’s why A/B tests are usually accompanied by other, complementary research methods.

Read also: Discovery research methods

If you like our content, join our busy bees community on Facebook, Twitter, LinkedIn, Instagram, YouTube, Pinterest, TikTok.

Author: Klaudia Kowalczyk

A graphic & UX Designer who conveys into design what cannot be conveyed in words. For her, every color, line, or font used has a meaning. Passionate about graphic and web design.

