Split testing, also known as A/B testing, is one of the cornerstones of effective marketing.
In it, you test two ideas against each other: two different ads, two versions of a web page, you name it. You measure customer response to determine which one is better at achieving your goal, whether that's more signups, more inquiries, more sales, or longer time on your website.
Keep the better version, throw out the “loser” and then try a new idea against the winner.
You may have a winner that stands for many years and beats all challengers, or you may have a new winner every week. Either way, you will know with confidence that the ad you are running is the best you have come up with so far.
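To make "which is better" concrete, here is a rough sketch, in Python with made-up numbers, of one common way analysts check whether a difference in response is real rather than luck: a two-proportion z-test on conversion rates. This isn't from any particular marketing tool; it's just an illustration of the kind of math behind declaring a winner.

```python
# Compare conversion rates for two variants with a two-proportion z-test.
# All counts below are hypothetical.
from math import sqrt
from statistics import NormalDist

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    in conversion rate between variant A (control) and variant B (challenger)."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p-value
    return z, p_value

# Hypothetical results: the control converted 120 of 4,000 visitors,
# the challenger converted 95 of 1,000 visitors.
z, p = z_test_two_proportions(120, 4000, 95, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the difference is real
```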
Of course I'm glossing over a great deal of hard work and fine detail. There's formulating a marketing goal, knowing what to track and how to track it, deciding how long to run the test, deciding how to allocate the “load” and so on.
Let’s look at that last one a bit.
Say you already have an established winner (called a “control” among professional marketers) that has proven itself over time. Now you have a new challenger which presents a completely new idea and you want to see how well it resonates with your customers compared to your control.
You wouldn't want to risk 50% of your customers on a gamble, but you do want to test the challenger on a large enough sample for the results to be statistically valid. Depending on the size of your customer base, it's normal to show the challenger piece to between 5% and 20% of your customers while the rest continue to get the control version.
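If you're wondering how that split actually gets enforced, here is a rough sketch of one common approach: hash each customer's ID so the same person always sees the same version, and route a fixed share of them to the challenger. The 10% share, the campaign name, and the customer IDs below are all hypothetical.

```python
# Deterministically split "load" between control and challenger.
import hashlib

CHALLENGER_SHARE = 0.10  # 10% see the new piece; the other 90% keep the control

def assign_variant(customer_id: str, campaign: str = "spring-offer") -> str:
    """Bucket a customer so they always see the same version of this campaign."""
    digest = hashlib.sha256(f"{campaign}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # map the hash to a value between 0 and 1
    return "challenger" if bucket < CHALLENGER_SHARE else "control"

for cid in ["cust-1001", "cust-1002", "cust-1003"]:
    print(cid, assign_variant(cid))
```

Hashing on the customer ID (rather than picking at random on every visit) keeps the experience consistent for each person and makes the results easier to track.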
Split testing requires an incredible amount of meticulous tracking and record keeping. Done right, it takes a full-time, dedicated person, and most big marketers have entire staffs devoted to split testing. That's a huge commitment, but the payoff is more effective marketing and more sales.