Of the many advantages digital marketing has brought to B2B marketers over the years, one of the least discussed is the improved ability to run test campaigns that boost the success of your marketing efforts.
With traditional marketing tactics (“analog marketing”) such as direct mail, it could be both expensive and time-consuming to design and print small quantities of test mailers, each with variations meant to let you measure how changes affect response rates. With print ads, outdoor advertising, and other analog marketing vehicles, there were often too many obstacles and expenses to make small-scale test campaigns practical.
In contrast, there are few excuses these days for not running test campaigns for email or other digital marketing efforts. (When there is an obstacle, it’s usually a lack of time caused by not planning far enough ahead of a key announcement or event date.)
A common mistake in campaign testing is fueled by the desire to test several variables at once. In an email campaign, for example, you may want to test two or three different versions of each of these variables:

- the main graphic
- the call to action
- the body copy
- the send date and time
It’s very tempting to test more than one of these factors at once. Don’t. Testing more than one variable at a time deprives you of the baseline control required to determine which change altered your response rate.
For example, let’s say we send out two email variations (version A and version B) to randomly selected test lists of equal size and otherwise identical makeup. Version B has both a different main graphic and a different call to action. After the test launch, the results show that version B has a 35% higher response rate than version A.
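To make the numbers concrete, here’s a minimal Python sketch of a random, equal-size list split and the response-rate arithmetic behind a “35% higher” result. The contact addresses and response counts are invented for illustration; any real email platform will have its own segmentation and reporting tools:

```python
import random

def split_test_lists(contacts, seed=42):
    """Randomly assign contacts to two equal-size test lists (A and B)."""
    shuffled = contacts[:]                      # copy; leave the master list intact
    random.Random(seed).shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:half * 2]

def response_rate(responses, sends):
    """Fraction of recipients who responded."""
    return responses / sends if sends else 0.0

# Invented numbers that mirror the example above:
contacts = [f"user{n}@example.com" for n in range(2000)]
list_a, list_b = split_test_lists(contacts)
rate_a = response_rate(54, len(list_a))         # version A: 54 of 1,000 responded
rate_b = response_rate(73, len(list_b))         # version B: 73 of 1,000 responded
lift = (rate_b - rate_a) / rate_a               # ~0.35, i.e. 35% higher than A
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  lift: {lift:+.0%}")
```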
It certainly looks like version B is the better email. But how do we know whether it was the change in the graphic, the change in the call to action, or a combination of both changes that was the true cause of this increased response rate? We don’t.
Here’s the challenge: it’s quite possible that only one of those changed variables (say, the call to action) was responsible for the improved results. It’s also possible that leaving the other variable alone (i.e., keeping the original email graphic) would have improved our results even further.
But we’ll have no way of knowing this if we don’t test each variable separately.
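One way to enforce this discipline is to describe each email variant as structured data and refuse to launch unless exactly one field differs. Here’s a minimal sketch; the field names and values are hypothetical:

```python
def changed_variables(variant_a, variant_b):
    """Return the names of every field that differs between two variants."""
    return [key for key in variant_a if variant_a[key] != variant_b[key]]

# Hypothetical variant definitions; the field names are illustrative only.
version_a = {"graphic": "hero_v1.png", "cta": "Download the guide", "body": "copy_v1"}
version_b = {"graphic": "hero_v2.png", "cta": "Start your trial",   "body": "copy_v1"}

differences = changed_variables(version_a, version_b)
if len(differences) != 1:
    raise ValueError(f"Not a single-variable test; these fields differ: {differences}")
```

Run against the flawed version A/B pair from the earlier example, this check raises an error before anything is sent.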
Now imagine you’re about to launch a different A/B email test, this time with only one variable altered: the email body copy. You launch version A on Tuesday at 9:00 a.m., then launch version B a few hours later.
Here’s the problem: you once again do not have a proper single-variable test. Version B was launched a few hours later than version A, so you won’t be able to determine whether any change in the response rate was due to the change in the body copy—or the change in the timing.
Always remember that time is itself a variable, one that must be factored into your campaign test plans. Unless time is specifically the variable you’re testing, launch your two test campaigns simultaneously (or at least within minutes, not hours).
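If your sending tool works through a list sequentially, one simple way to keep time out of the comparison is to dispatch both variants in a single randomized, interleaved pass. A sketch, assuming send_a and send_b stand in for whatever functions actually dispatch each version:

```python
import random

def interleaved_send(list_a, list_b, send_a, send_b, seed=7):
    """Dispatch both variants in one randomized pass so that neither version
    is systematically sent earlier than the other."""
    queue = [("A", addr) for addr in list_a] + [("B", addr) for addr in list_b]
    random.Random(seed).shuffle(queue)
    for variant, addr in queue:
        (send_a if variant == "A" else send_b)(addr)

# Usage with placeholder send functions:
interleaved_send(
    ["a1@example.com", "a2@example.com"],
    ["b1@example.com", "b2@example.com"],
    send_a=lambda addr: print(f"A -> {addr}"),
    send_b=lambda addr: print(f"B -> {addr}"),
)
```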
The bottom line: take advantage of the low-cost opportunity that digital marketing creates for test campaigns. But be careful to isolate your variables (including time) and test them one at a time, so that you have a baseline comparison that lets you draw actionable conclusions from your test results.