Targeting Your Marketing Online
Some years ago, my wife and I were looking forward to our second annual deer hunt together. She had a deer hunting tag, and I was being cast in the role of pack mule.
At 2 a.m. on the morning of the hunt, her new scope lay in its box instead of being attached to the rifle. It should have been “sighted-in” days earlier – adjusted so that the point of impact of a shot, at a certain range, is exactly the point you’re aiming at. The moment you’ve actually spotted a deer is no time to be making assumptions about how to compensate for the trajectory of the bullet.
But, never mind. Cinda’s rifle stayed home, and off we went. At the hunt, our friend Paul – who is about the best hunter I’ve ever met (some believe he even thinks like an elk) – loaned Cinda a rifle and we got started.
The sun beat down as our aching bodies dragged our rifle, packs, and hopes through 12 grueling hours without seeing a deer. Finally, the fading light brought some respite to our tired eyes, beckoning us to our car and the welcoming local bar with the promise of ice-cold beer to slake our thirst.
“Ssshh! Stop!”
We dropped to our knees, and then crawled on our bellies for about 50 yards, positioning ourselves 306 yards (our rangefinder said) across a valley from the opposite hillside. Breathe in. Settle. Adrenaline kicks in.
Perfect.
“Where is this gun sighted-in to?” Cinda asked.
“I don’t know. Didn’t you ask Paul?”
“No, I thought you did!”
The deer stood quietly, eating clover in the field and unaware of our distant fumbling.
“Do you have your cell phone on you?”
“Yes, in my pack. But why do you want that?”
“Paul. Hi,” I whispered into the phone. “Where is this gun sighted-in to?”
“300 yards.”
“300 yards. Aim center…”
Every aspect of this story (and its outcome) would be all but impossible to repeat, but there is a lesson we can learn from it: Making sure that you test your ideas and assumptions before you really need them in “real world” situations will help ensure your success.
I remembered this story a few days ago, while thinking about the number of questions we are receiving about our forthcoming Internet conference – in particular about how to do tests to see whether your product or advertisement will succeed and be profitable.
“How do you test the product to see how big the market is and at which price?” “How do you test the profitability of a market before you roll out a large-scale campaign?” “How do you test your ads and mail pieces?”
Any “classic” direct-mail marketer will tell you that you never roll out your entire mailing in one go. Always test what works and then mail your most successful test piece to increasingly larger sections of your list.
One of the advantages of doing business online is how easily – and at almost zero cost – you can run this kind of direct-mail-style testing to see what works.
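To make that concrete, here is a minimal sketch in Python of how you might carve a small test batch out of a list before rolling out to the rest. Everything in it – the addresses, the 10% test fraction, the function name – is a hypothetical stand-in, not part of any particular e-mail tool:

```python
import random

def staged_rollout(addresses, test_fraction=0.1, seed=42):
    """Split a mailing list into a small test batch and a holdout for a later rollout.

    The 10% test fraction is an arbitrary example, not a recommendation.
    """
    rng = random.Random(seed)          # fixed seed so the split can be reproduced
    shuffled = list(addresses)
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * test_fraction)
    return shuffled[:cutoff], shuffled[cutoff:]

# Hypothetical subscriber list.
subscribers = ["reader{}@example.com".format(i) for i in range(1000)]
test_batch, holdout = staged_rollout(subscribers)
print(len(test_batch), "test addresses;", len(holdout), "held back for the rollout")
```

You mail the test piece to the small batch first, then send the winning version to the holdout in progressively larger waves.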
But how do you “do testing”? And what, exactly, are you testing?
There are many things that you can test in an e-mail promotion, including:
- What does the subject line say?
- Is the e-mail personalized (“Dear Fred”)?
- Is the e-mail in plain text or formatted (HTML)?
- Does the e-mail contain images?
- Does the e-mail contain an offer for just one product?
- What is the price of the product?
- Is the copy long or short?
- Are orders directed to a website only, or are you including a phone number, too?

By testing many variables, you start to get an idea of which ones influence your promotion. You then start to roll out the most successful elements to more and more people on your list.
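One way to keep all of those moving parts straight is to write each version of a promotion down as a simple record, so that a variant differs from the control by exactly one field. A purely illustrative sketch – the field names, subject lines, and prices are made up for the example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Promotion:
    """One version of an e-mail promotion; the field names are illustrative only."""
    subject: str
    personalized: bool           # "Dear Fred" vs. a generic greeting
    html: bool                   # formatted (HTML) or plain text
    includes_images: bool
    single_product_offer: bool
    price: float
    long_copy: bool
    phone_number: Optional[str]  # None means orders go to the website only

# A control piece and a variant that changes exactly one element: the subject line.
control = Promotion("Save on widgets", True, False, False, True, 79.0, True, None)
variant = Promotion("Your widget discount inside", True, False, False, True, 79.0, True, None)
```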
It seems simple. So… away you go. Lock and load.
Measure Twice, Cut Once
When designing tests, it’s important to make sure you are measuring the results in the right way. (Marketers call these data “metrics” – and are somewhat fond of using that word.) You should almost always vary only one element between the segments (parts, or splits) of your e-mail campaign. That way, you can pinpoint the reason for any difference in response rate.
Let’s say, for example, that you take your e-mail list and split it randomly into two parts. To Segment A, you e-mail Subject One, Heading One. You include images and make the price $79. Segment B gets Subject Two, Heading Two. You do not include images. The price is $49.
Segment B brings in 30 orders; Segment A, only 10 orders. Segment B, you conclude, is the clear winner. You also decide that it was the reduced price in the Segment B mailing that produced the better result.
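The metric being compared in a test like this is the response rate for each segment: orders divided by e-mails delivered. Here is a quick sketch using the order counts from the example; the segment size of 1,000 addresses apiece is an assumption added for illustration:

```python
def response_rate(orders, delivered):
    """Orders as a fraction of e-mails delivered."""
    return orders / delivered

# Order counts from the example above; the 1,000-address segments are assumed.
segments = {"A": {"orders": 10, "delivered": 1000},
            "B": {"orders": 30, "delivered": 1000}}

for name, seg in segments.items():
    print("Segment {}: {:.1%}".format(name, response_rate(seg["orders"], seg["delivered"])))
```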
A-hunting We Shall Go…
You reload your marketing gun with the original Segment B copy, take aim, and fire at the original Segment A list. You miss the target, and your campaign bombs. Only two orders from the entire mailing. Naturally, you are left scratching your head… wondering why this happened. “After all,” you think, “my mailing to the Segment B list was so much more successful.”
The problem stems from the fact that in your original test, you did not sight-in your gun properly.
When sighting-in a gun, you have to make sure that all the variables (things that could change) are kept constant, or at least to a minimum: bullet weight, target distance, gun caliber, wind speed and direction, and so on. Lots to think about. But practice really does make perfect. And then, when you actually go hunting, you know that you are prepared.
In our e-mail campaign example, your original test contained too many variables. So you could not say for certain which one was responsible for the positive results you saw with Segment B. Was it the lower price or was it the subject line? Was it the subject line or the fact that you did not include any images? Confused?
The correct way to run your initial campaign is to test just one thing. It could be the subject line (a common test), the price, or whether the e-mail is plain text or formatted (HTML). You could then assess the results and try mailing the most successful piece to the segment that had not yet received it. You should also mail your (apparently) least-successful piece to your first “most-successful” group. Note that it is important to maintain the same split of the list when doing further testing, so that you are always testing against the same e-mail addresses.
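One way to keep the split identical from mailing to mailing is to derive each address's segment from a hash of the address itself, rather than shuffling the list afresh each time. This is just one possible scheme, sketched below, but it guarantees that the same subscribers land in the same segment every time:

```python
import hashlib

def segment_for(address, num_segments=2):
    """Assign an address to a segment deterministically.

    Hashing the address means the same subscriber always falls into the
    same segment, so follow-up tests run against the same split of the list.
    """
    digest = hashlib.sha256(address.strip().lower().encode("utf-8")).hexdigest()
    return int(digest, 16) % num_segments

# Hypothetical addresses: segment 0 gets Subject One, segment 1 gets Subject Two,
# and everything else about the two mailings stays identical.
for address in ["reader{}@example.com".format(i) for i in range(5)]:
    print(address, "-> segment", segment_for(address))
```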
Remember that the secret to this kind of testing is to test just one thing.
You can achieve some interesting results with this kind of testing. For example, I know one group of direct-response marketers who did a four-way mailing split, testing four different prices on the same offer: $29, $39, $49, and $79. More sales resulted from the $79 offer than from any of the others – which shows that competing on price may not matter in your market if your offer is a good one.
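If you wanted to run a four-way price split of your own, it is the same single-variable idea stretched across four segments. The sketch below simply tallies orders against the price each segment saw; the counts start at zero and are placeholders, not the figures from that test:

```python
# Four segments, identical offer; only the price differs between them.
prices = [29, 39, 49, 79]

# Placeholder tallies for illustration; substitute your own test results.
orders_by_price = {price: 0 for price in prices}

def record_order(price):
    """Log one order against the price the buyer was shown."""
    orders_by_price[price] += 1

# Call record_order(price) once per incoming order, then compare the segments.
for price in prices:
    print("${}: {} orders".format(price, orders_by_price[price]))
```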