You can take advantage of our platform's A/B split test campaigns to increase the impact of your email marketing. With A/B split testing, you can test different versions of your designs, learn what your subscribers prefer, and improve your campaigns accordingly.
When creating an A/B split test campaign, you set the criteria that define the winning version. Your recipients are divided into two groups: a test group and a winner group. The test group receives the two campaign versions, and the winning version is determined based on how those recipients react. Once the A/B split test has been completed, the winning version is sent to the rest of the email list, known as the winner group.
For example, when creating an A/B split test campaign, part of the process is setting the percentage of subscribers in the test group. In this example, you set 10% of your subscribers to receive Campaign A and another 10% to receive Campaign B. The remaining 80% of your subscribers receive whichever version wins between Campaign A and Campaign B.
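To make the arithmetic concrete, here is a minimal sketch of how a 10% / 10% / 80% split divides a recipient list. This is illustrative only, not the platform's own code, and the function and variable names are hypothetical:

```python
import random

def split_recipients(recipients, test_percentage=0.10, seed=42):
    """Split a recipient list into test groups A and B and a winner group.

    Each test group gets `test_percentage` of the list; the remainder
    forms the winner group that later receives the winning version.
    """
    shuffled = recipients[:]
    random.Random(seed).shuffle(shuffled)

    group_size = int(len(shuffled) * test_percentage)
    group_a = shuffled[:group_size]                 # receives Campaign A
    group_b = shuffled[group_size:2 * group_size]   # receives Campaign B
    winner_group = shuffled[2 * group_size:]        # receives the winning version
    return group_a, group_b, winner_group

# Example: 1,000 subscribers -> 100 get A, 100 get B, 800 wait for the winner.
a, b, rest = split_recipients([f"user{i}@example.com" for i in range(1000)])
print(len(a), len(b), len(rest))  # 100 100 800
```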
You also define the criterion that determines the winning campaign:
- The winning campaign is the one with the highest unique open rate
- The winning campaign is the one with the highest unique click rate
In the example above, if you select the unique open rate as your criterion, the campaign with the higher number of unique opens wins.
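As a rough illustration of how the comparison works (again a hypothetical sketch with made-up field names, not the platform's actual logic), both criteria come down to comparing one rate per version:

```python
def pick_winner(stats_a, stats_b, criterion="unique_opens"):
    """Pick the winning version by the higher unique open or click rate.

    `stats_a` / `stats_b` are dicts like
    {"sent": 100, "unique_opens": 42, "unique_clicks": 9}.
    """
    def rate(stats):
        return stats[criterion] / stats["sent"] if stats["sent"] else 0.0

    return "Campaign A" if rate(stats_a) >= rate(stats_b) else "Campaign B"

# Example: A has a 42% unique open rate, B has 35% -> A wins on opens.
print(pick_winner({"sent": 100, "unique_opens": 42, "unique_clicks": 9},
                  {"sent": 100, "unique_opens": 35, "unique_clicks": 12}))
```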
Lastly, you need to set the duration of the test before a winning version is decided. The maximum duration for the test is 24 hours, whereas the minimum is 1 hour.
This means the winning version is determined based on each campaign's performance during the test duration you have set. For example, if you set the A/B test to run for one hour and, after that hour, Campaign A has better results than Campaign B, Campaign A becomes the winning version. Any campaign statistics received after that timeframe are ignored. So even if Campaign B performs better after two hours, those results do not count because they fall outside the test timeframe.
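The cutoff rule could be pictured like this (a simplified, hypothetical sketch): only events recorded within the test window count toward the comparison, and anything after the cutoff is discarded.

```python
from datetime import datetime, timedelta

def count_opens_within_window(open_events, test_start, test_hours):
    """Count unique opens that happened before the test window closed."""
    cutoff = test_start + timedelta(hours=test_hours)
    return len({email for email, opened_at in open_events if opened_at <= cutoff})

start = datetime(2024, 1, 1, 9, 0)
events_b = [
    ("ann@example.com", start + timedelta(minutes=30)),  # counted
    ("bob@example.com", start + timedelta(hours=2)),     # ignored: after the 1-hour window
]
print(count_opens_within_window(events_b, start, test_hours=1))  # 1
```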