A/B testing is an excellent research method for comparing two alternative versions of a given solution at the same time. Read our article to learn how to conduct A/B tests and see their benefits and limitations.

A/B testing in UX – table of contents:

  1. What are A/B tests in the context of UX research?
  2. When to apply A/B testing?
  3. How to conduct A/B testing?
  4. Summary

What are A/B tests in the context of UX research?

A/B testing allows you to test two versions of a product/solution (version A and version B) and identify the one that wins greater approval from users. Approval can be measured through conversion rate, time spent on the site, or participants’ feedback and their propensity to recommend the site/product. Before the test, you need to define what “success” will mean for each version.
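
For instance, here is a minimal sketch of computing one such success metric – conversion rate – for both versions; all visitor and conversion counts are hypothetical examples:

```python
# Compare the "success" metric (here: conversion rate) of two versions.
visitors_a, conversions_a = 5000, 430   # users who saw version A
visitors_b, conversions_b = 5000, 505   # users who saw version B

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b
print(f"Version A converts at {rate_a:.1%}, version B at {rate_b:.1%}")
# Whether a difference like this counts as "success" must be defined before the test.
```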

When to apply A/B testing?

You can deploy A/B tests for prototype testing, during the product development phase, as well as for building marketing and promotional strategies. They are the perfect tool for arriving at decisions that can affect an organization’s bottom line. A/B tests come in especially handy when we already have a hypothesis based on previous research and want to confirm that it points to the right solution. Research questions constructed for A/B testing might look like these:

  • Which version of the product generates a higher conversion rate?
  • Which of the two differently worded push notifications increases engagement in the app?

A sound A/B test should make its comparisons as simple as possible: e.g., instead of comparing two completely different versions of a site, it is better to test two header styles or two distinct locations of the CTA button. With such narrow comparisons, we can precisely pinpoint which font, color, element or location influences the UX most.

This research method comprises tests of two kinds: univariate and multivariate. The first focuses on the difference between two variants of a single element – for example, a red button and a blue button. A multivariate test, however, compares more than two variants at the same time – e.g. red, blue, green and white buttons (which can additionally differ in their headings, e.g. “Check this” and “See more”).
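
To illustrate the difference in scale, here is a minimal sketch of enumerating the combinations such a multivariate test would have to cover, using the example colors and headings above:

```python
# Enumerate all variant combinations for a multivariate test.
from itertools import product

colors = ["red", "blue", "green", "white"]
headings = ["Check this", "See more"]

variants = list(product(colors, headings))
print(f"{len(variants)} combinations to compare:")  # 4 colors x 2 headings = 8
for color, heading in variants:
    print(f"- {color} button with heading '{heading}'")
```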

The key upsides of A/B testing are its speed and low cost. It also enables evaluating several product variants on a large group of real users. Still, take care to focus on the aspects that can have a real impact on the overall perception of the product – don’t compare random elements. Make a hypothesis, carry out other complementary research, then consult your design and development team. Together, you’ll settle which essential features to examine across multiple versions by conducting univariate or multivariate A/B tests.

A/B testing is usually a quick form of research, though that’s not a rule. You may need to run a test for as long as a few weeks to get enough data for UX analysis (but it can just as well wrap up within a few days or even a few hours). How long a test takes depends on many factors – chiefly the amount of traffic and the size of the effect you want to detect.

How to conduct A/B testing?

  1. Identify your problem. Make sure to apply the right analytical tools to precisely establish the nature of the problem. Find out as much as you can about both the problem and the users, and get a good feel for them. Pinpoint precisely where the flaw occurs and try to figure out why it happens – a detailed understanding will contribute to a properly rigorous analysis.

  2. Formulate a hypothesis by answering how to solve the problem. A hypothesis is a testable assumption; you can phrase it as a condition – “if X happens, then Z” – for example, “if the headline is set in font size 22 instead of 18, the conversion rate will increase”. A/B testing will show whether the conjecture stated in the hypothesis is correct.

  3. Define your goal. Determine what you want to achieve with the study as well as through the entire research and design process – for example, getting more users to click the CTA button on the homepage.

  4. Define statistical accuracy. Determine the figures you need both for the practical evaluation of the test and to present to business stakeholders – e.g., will a 2% increase in conversions satisfy them and justify the investment in the study?

  5. Define the required scale of results. What number of respondents will ensure statistical accuracy? What percentage of the daily, weekly or monthly user base will make the results valuable and conclusive? It is imperative to determine this before proceeding with the test – a rough sample-size estimate is sketched after this list.

  6. Create version B and test your hypothesis. Prepare an alternative variant (variant B) of the site/product/feature based on your hypothesis and start testing. At this stage, developers step in to implement the second, alternative solution alongside the existing one, and users – unknowingly split into two groups (group A and group B) – use the site/app as before; one common way of splitting them is sketched after this list. During the test, try to look at your data only after you have collected enough of it to reach statistical validity and a viable result.

  7. Analyze and act on the test results. If version B meets the established effectiveness threshold and the results confirm your hypothesis, you can proceed to implement it for all users (no longer split between versions A and B). However, if the hypothesis is disproven, stay with the original version A or devise and test a new hypothesis – a simple significance check is sketched after this list. Also consider alternative research methods to supplement the data.
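
For step 5, here is a minimal sketch of estimating how many visitors each variant needs, using the standard normal-approximation formula for two proportions (it assumes scipy is available; the function name and the baseline/target conversion rates are hypothetical examples):

```python
# Rough sample-size estimate for a two-proportion A/B test
# (normal-approximation formula; alpha and power are conventional defaults).
import math
from scipy.stats import norm

def sample_size_per_variant(p_a, p_b, alpha=0.05, power=0.80):
    """Visitors needed in EACH variant to detect a lift from p_a to p_b."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # desired statistical power
    variance = p_a * (1 - p_a) + p_b * (1 - p_b)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_a - p_b) ** 2)

# e.g. a baseline conversion rate of 10% and a hoped-for lift to 12%:
print(sample_size_per_variant(0.10, 0.12))  # ~3,839 visitors per variant
```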
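
For step 6, one common way to split users is deterministic hashing, so that each user always lands in the same group on every visit. This is only a sketch – the experiment name and user ID are hypothetical:

```python
# Deterministically assign each user to group A or B by hashing their ID.
import hashlib

def assign_variant(user_id: str, experiment: str = "cta-button-test") -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("user-1234"))  # the same user always gets the same group
```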
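
And for step 7, one simple way to check whether the difference between the groups is statistically significant is a two-proportion z-test. This sketch assumes statsmodels is available, and all counts are made-up examples:

```python
# Compare conversion rates of versions A and B with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 560]   # version A, version B
visitors = [4000, 4000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"A: {conversions[0] / visitors[0]:.1%}, B: {conversions[1] / visitors[1]:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Statistically significant - consider rolling out version B.")
else:
    print("Not significant - keep version A or test a new hypothesis.")
```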

Summary

A/B testing is a fairly technical subject. It requires some knowledge of statistics, as well as more specialized technical/programming know-how (or a good relationship with the company’s development team). At the same time, it’s a direct method that is quite simple, fast and cheap, letting you compare two alternative versions of a product at little cost and with satisfactory results. What’s more, because its findings come from real users, they are as precise as you can get. Still, remember that you can’t test every feature, element or tiny detail of a site – that’s why, alongside A/B tests, it’s standard practice to carry out other, complementary research methods.

Read also: Discovery research methods



Author: Klaudia Kowalczyk

A graphic & UX designer who conveys into design what cannot be conveyed in words. For him, every color, line or font used has a meaning. Passionate about graphic and web design.

UX research:

  1. What is UX research?
  2. Types of UX research
  3. What are research questions and how to write them?
  4. Requirements gathering process for UI/UX projects
  5. Why are stakeholder interviews crucial for the design process?
  6. How to leverage our gathered customer data?
  7. How to create a good UX research plan?
  8. How to choose a research method?
  9. How can pilot testing improve UX research?
  10. UX study participant recruitment
  11. Channels and tools for finding UX research participants
  12. Screener survey for UX Research
  13. UX Research Incentives
  14. UX research with children
  15. Discovery research methods
  16. What is desk research?
  17. How to conduct user interviews?
  18. How to conduct diary studies?
  19. What are focus groups in research?
  20. What is ethnographic research?
  21. Survey research
  22. What is card sorting in UX?
  23. What is evaluative research?
  24. How to conduct usability testing?
  25. When and how to run preference testing?
  26. What is A/B testing in UX?
  27. Eyetracking in UX testing
  28. What is tree testing?
  29. First click testing
  30. What is task analysis in UX research?
  31. Evaluation of emotions in UX
  32. Continuous Research in UX
  33. Data analysis in UX research
  34. How to prepare a UX research report?
  35. Customer Journey Map – what is it and how to create it?