Applications of A/B testing in Email Marketing
A/B testing is the most common form of testing among marketers because of its simplicity and its effectiveness in validating hypotheses involving a single variable.
A/B testing consists of pitting two or more versions of the same message against each other, where only one variable changes between the versions. Examples of elements to A/B test are:
Subject line:
- Long vs. short
- Personalised vs. unpersonalised
- Mentioning the offer vs. mentioning the brand
- Imperative vs. declarative
- Etc.
Offers:
- Discount vs. in-kind incentive
- Discounted price vs. free shipping costs
- Incentive A vs. incentive B
- Etc.
CTAs:
- Copy: imperative vs. declarative
- Personal vs. impersonal
- Including the benefit vs. not including the benefit
- Different designs and colours
- Etc.
Images and colours
From Email
Another element to test is the From field, both the sender's name and the email address. So, for example, we can test the From Name:
- Real name of a person vs. brand name
- Brand name vs. product name
Time of dispatch:
- One hour in the morning vs. one hour in the afternoon
- One day of the week vs. another day of the week
For A/B testing to be effective, the following points must be taken into account:
- Only a single variable should change in the test; otherwise the conclusions we reach may be wrong. Let's imagine that we want to test which of two subject lines gives us a higher open rate. If we launch version A at 09:00 and version B at 12:00, we will not know whether the increase in opens of the winning version is due to the subject line or to the time at which the campaign was launched.
- A/B tests of copy, calls to action, offers and images should be evaluated using the Click Through Rate and/or the Click To Open Rate (also called reactivity), as in the sketch after this list.
- When a version wins a test, it should become the control in a new test against a fresh alternative. In this way, improved versions are always competing, so the gains in results are sustained.
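As an illustration of the last two points, the minimal sketch below randomly splits a recipient list into two equal groups, so that only the tested element differs between them, and then computes the Click Through Rate and Click To Open Rate for each version. The recipient list and the click and open figures are hypothetical.

import random

def split_ab(recipients, seed=42):
    # Randomly split the recipient list into two equal groups, A and B,
    # so the only systematic difference between them is the element being tested.
    shuffled = list(recipients)
    random.Random(seed).shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def ctr(clicks, delivered):
    # Click Through Rate: clicks over delivered emails.
    return clicks / delivered

def ctor(clicks, opens):
    # Click To Open Rate (reactivity): clicks over opens.
    return clicks / opens

# Hypothetical recipient list and campaign figures for the two versions
recipients = [f"user{i}@example.com" for i in range(20_000)]
group_a, group_b = split_ab(recipients)
print(f"Version A: CTR = {ctr(420, len(group_a)):.2%}, CTOR = {ctor(420, 2_100):.2%}")
print(f"Version B: CTR = {ctr(510, len(group_b)):.2%}, CTOR = {ctor(510, 2_050):.2%}")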
Once we have the results, we have to make sure that the differences between the winning version and the rest are statistically significant. To do this, we have to resort to statistical calculations as we see below:
Let's assume that version A of a creative has generated 163 sales and version B 210. To find out whether the difference between them is statistically significant, we apply the following calculation:
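A common way to run this check is a two-proportion z-test. The minimal sketch below applies it to the figures above; the send volume of 10,000 emails per version is a hypothetical assumption, since the test compares conversion rates rather than raw sales counts.

from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, sent_a, conv_b, sent_b):
    # Two-proportion z-test for the conversion rates of two email variants.
    p_a = conv_a / sent_a
    p_b = conv_b / sent_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (sent_a + sent_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Sales figures from the example; the 10,000 sends per version are assumed
z, p_value = two_proportion_z_test(163, 10_000, 210, 10_000)
print(f"z = {z:.2f}, p-value = {p_value:.4f}")

If the resulting p-value is below 0.05, the difference can be considered significant at the 95% confidence level; with the assumed volumes above, that is the case for these figures. A pooled standard error is used here, which is the usual choice when testing whether two proportions differ.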