A/B testing is an excellent research method for testing two alternative versions of a given solution at the same time. Read our article to learn how to conduct A/B tests and see their benefits and limitations.
A/B testing in UX – table of contents:
- What are A/B tests in the context of UX research?
- When to apply A/B testing?
- How to conduct A/B testing?
What are A/B tests in the context of UX research?
A/B testing allows you to test two versions of a product or solution (version A and version B) and identify the one that wins greater approval from users. Approval can be measured through conversion rate, time spent on the site, or participants’ feedback and their willingness to recommend the site or product. Before the test, you need to define what “success” will mean for a given version.
When to apply A/B testing?
You can deploy A/B tests for prototype testing, during the product development phase, and when building marketing and promotional strategies. They are a perfect tool for making decisions that can affect an organization’s bottom line. A/B tests come in handy especially when we already have a hypothesis based on previous research and want to confirm that it points to the right solution. Research questions constructed for A/B testing might look like these:
- Which version of the product generates a higher conversion rate?
- Which of the two differently worded push notifications increases engagement in the app?
A sound A/B test should make comparisons as simple as possible: instead of comparing two completely different versions of the site, it is better to test two header styles or two locations of the CTA button. With narrow comparisons, we can pinpoint which font, color, element, or location influences the UX most.
This research method comprises tests of two kinds: univariate and multivariate. The first focuses on the difference between two variants of a single item – for example, a red button and a blue button. A multivariate test, however, compares more than two variants at the same time – e.g. red, blue, green, and white buttons (which can additionally differ in headings, e.g. “Check this” and “See more”).
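The difference in scale between the two kinds of test can be sketched in a few lines of Python: a multivariate test cell is one combination from the cross-product of each element’s options. The variant values below are taken from the examples above; the variable names are illustrative.

```python
from itertools import product

# Options for each element under test (values from the examples above)
button_colors = ["red", "blue", "green", "white"]
headings = ["Check this", "See more"]

# A univariate test compares variants of one element only:
univariate_cells = [("red",), ("blue",)]

# A multivariate test crosses every option of every element,
# so each combination becomes one test cell:
multivariate_cells = list(product(button_colors, headings))

print(len(univariate_cells), len(multivariate_cells))  # 2 vs 8 cells
```

Note how quickly the number of cells grows: each extra element multiplies the variants to compare, which is why multivariate tests need considerably more traffic than a simple A/B split.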
The key upsides of A/B testing are speed and low cost. It also lets you evaluate several product variants on a large group of real users. Still, take care to focus on aspects that can have a real impact on the overall perception of the product – don’t compare random elements. Form a hypothesis, carry out complementary research, then consult your design and development team. Together, you’ll settle which essential features to examine by running univariate or multivariate A/B tests.
A/B testing is usually a quick form of research – though that’s not a rule. You may need to run a test for a few weeks to gather enough data for UX analysis, but you may just as well finish in a few days or even a few hours. How long a test takes depends on many factors.
How to conduct A/B testing?
- Identify your problem.
- Find out as much as you can about the problem as well as the users. Get a good feel for them.
- Formulate a hypothesis by answering how to solve the problem.
- Define your goal.
- Define the required statistical significance.
- Define the required scale of results.
- Create version B and test your hypothesis.
- Analyze and act on the test results.
Make sure to apply the right analytical tools to precisely establish the nature of the problem.
Pinpoint precisely where the flaw occurs and try to figure out why it happens. A detailed understanding of it will make for a properly rigorous analysis.
A hypothesis is a testable assumption. You can formulate it as a condition – “if X happens, then Y” – for example, “if the headline is set in font size 22 instead of 18, the conversion rate will increase”. A/B testing will tell you whether the conjecture stated in the hypothesis is correct.
Determine what you want to achieve with the study as well as through the entire research and design process – for example, you want more users to click on the CTA button on the homepage.
Determine the numbers you need both for the practical evaluation of the test and to present to business stakeholders – e.g., will a 2% increase in conversions satisfy them and justify the investment in testing?
What number of respondents will ensure statistical significance? What percentage of the daily, weekly, or monthly user base will make the results valuable and conclusive? It is imperative to determine this before proceeding with the test.
Prepare an alternative variant (variant B) of the site/product/functionality based on your hypothesis and start testing. At this stage, developers step in to implement a second, alternative version of the existing product – and users, unknowingly split into two groups (group A and group B), use the site/app as before. During the test, try to look at your data only after you have collected enough of it to reach statistical validity and a reliable result.
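A common way to split users without their knowledge is deterministic hash bucketing: hashing a stable user identifier means each user always sees the same variant across visits, with no assignment table to store. A minimal sketch, assuming a hypothetical `user_id` and experiment name:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "cta-button") -> str:
    """Deterministically bucket a user into group A or B.

    Hashing the user ID together with the experiment name keeps each
    user's assignment stable across visits, while different experiments
    split the same audience independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same user always lands in the same group:
print(assign_variant("user-42"))
```

Because SHA-256 output is effectively uniform, large audiences split close to 50/50 without any coordination between servers.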
If version B meets the established effectiveness threshold and the results confirm your hypothesis, you can proceed to roll it out to all users (no longer split between versions A and B). However, if the hypothesis is disproven, keep the original version A or devise and test a new hypothesis. Also consider alternative research methods to supplement the data.
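Whether version B has really beaten the threshold is usually checked with a two-proportion z-test on the conversion counts. The sketch below uses only the standard library; the conversion numbers are made up for illustration.

```python
from math import sqrt
from statistics import NormalDist

def ab_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates.

    conv_a, conv_b -- number of conversions in groups A and B
    n_a, n_b       -- number of users exposed to each variant
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0 (no difference)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 200/2000 conversions on A vs 260/2000 on B:
p = ab_p_value(200, 2000, 260, 2000)
print(f"p = {p:.4f}")  # well below 0.05, so the lift is unlikely to be chance
```

A p-value below the significance level chosen earlier (commonly 0.05) supports rolling out version B; above it, the honest conclusion is that the data does not confirm the hypothesis, even if B’s raw numbers look slightly better.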
A/B testing is a fairly technical subject. It requires some knowledge of statistics, as well as more specialized technical/programming know-how (or a good relationship with the company’s development team). Even so, it is a direct method that is simple, fast, and cheap: it enables comparing two alternative versions of a product at little cost with satisfactory results. What’s more, because its findings come from real users, they are as reliable as you can get. Still, remember that you can’t test every feature, element, or tiny detail on the site – that’s why A/B tests are usually accompanied by other, complementary research methods.
Read also: Discovery research methods