Online testing

Results Sway Even the Skeptic

The old DM testing axiom claims, “Success happens quickly; failure drags on forever.” If a test is going to work, you'll know it right away. If not, no amount of hopeful waiting will change it.

Like all rules of thumb, though, it depends.

Some time ago we ran an online campaign in which we offered a retail gift card as a back-end premium to constituents who signed up as monthly donors.

It added excitement. The test message saw a 22% increase in donor response along with a 6% higher average gift, for a roughly 30% advantage in gross revenue.
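For readers who want to check the math: response and average-gift lifts compound multiplicatively, since revenue is responders times average gift. A minimal sketch, using only the two percentages from the article (no actual campaign figures are assumed):

```python
# The 22% and 6% lifts come from the article; everything else is generic.
response_lift = 0.22   # test message: 22% more donors responded
avg_gift_lift = 0.06   # test message: 6% higher average gift

# Revenue = (number of responders) x (average gift), so the two lifts
# compound multiplicatively rather than simply adding.
gross_revenue_lift = (1 + response_lift) * (1 + avg_gift_lift) - 1
print(f"{gross_revenue_lift:.1%}")  # → 29.3%, i.e. roughly the 30% cited
```

The compounded figure is 29.3%, which the article sensibly rounds to 30%.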

What’s more, half of the responders actually opted out of the gift card… saving the cost of fulfillment. Still, on a total cost basis, the initial increase in response did not justify the added expense.

Keep in mind, however, this was an appeal for monthly donors and was based on an assumption (a.k.a. a hypothesis) that the program could reach breakeven within 60 to 90 days.

Which did prove to be the case. It made breakeven. Barely. But was it worth the trouble?

We were concerned, for example, that some people might sign up for the gift card and cancel their monthly commitment once they received/redeemed the card.

This wasn’t the case; in the first 90 days 12% of the control (no card) group cancelled their commitment. None of the test responders cancelled.

By the end of the first year, the test group generated 60% more revenue than the control group. Or, a 7:1 return on the investment in the cards. By the end of year two this had grown to an 11:1 return.

I’ll confess: I was skeptical at the outset. I’d like to believe that mission-based responders are better (i.e., longer term, more loyal) supporters than premium responders.

And while that may be true in many cases (e.g., acquisition), it isn’t the case here.

So, are we going to use the offer again?

Absolutely. Even if it did take a while to prove out. After all, isn’t that the whole point of testing?