But before we dive in, you may be wondering: "what is an AB test?" Good question! Put simply - it's when we split a segment of data (like customers) into two groups and give each group a different experience, to see which they respond better to.
So - why do we AB test?
You might initially think "to improve something", and I'd say you're partially correct. There are many reasons why we test, but the main one is to reduce uncertainty. For example:
Recently I was consulting on an email campaign, and we were uncertain whether adding the webinar date to the subject line would improve registrations. We thought it would help - but we didn't know for sure.
So to test our thinking, we ran a straightforward AB test, comparing the open rates of these two subject line variations:
The result? It turns out the email without the date performed better on every metric. That's a great learning for our clients, and something they can carry forward.
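How do we know a difference in open rates is a real effect and not just noise? One common approach is a two-proportion z-test. Here's a minimal sketch - the numbers below are hypothetical, not the campaign's actual figures:

```python
import math

def two_proportion_z_test(opens_a, sent_a, opens_b, sent_b):
    """Two-sided z-test for a difference in open rates."""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled open rate under the null hypothesis (no difference)
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical campaign: 5000 emails per variant
z, p = two_proportion_z_test(1100, 5000, 1000, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value comes out below your chosen threshold (0.05 is conventional), you have decent evidence the variants really do perform differently.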
There's an overhead to testing, though, so it's worth having clear guidelines for when a test is actually justified.
Join me next time for some more random statistics musings. Perhaps I'll go into the truly elegant math behind testing…