How to Prove the Value of Your Database

Posted by Chief Marketer Staff

If you’ve been in direct marketing for more than a few months, you already know the basics of setting up test cells that will deliver statistically projectable results. However, you should consider creating an enterprise-wide control group. This type of control group differs from—and does not interfere in any way with—the A/B split testing typically employed to test and measure one creative or communications approach against another. Instead, its purpose is to provide an ongoing measurement of the value of your company’s database marketing effort globally.

Why would any organization need such a measure? Well, after the first year, senior executives will want to know how well the new program worked. Furthermore, two or three years down the road, the highly supportive CEO, CFO, or CMO you rely on for funding might not be with the organization. The answer lies in proof—in the ability to demonstrate the overall value of your database marketing program to original supporters or, maybe, to a newly hired (and highly skeptical) brand-oriented replacement. That means real numbers, not anecdotes.

The basic idea is to isolate certain customers so that they do not receive any of the promotions or continuity communications that collectively make up the organization’s database marketing program. Comparing those customers over time to the ones who do receive those promotions and communications will provide an incremental measurement of the overall value of the effort. This important tool will prove the validity of the decision to invest in database marketing technologies and processes. Perhaps even more important, it will provide trend measurement. As your database marketing program matures, there should be an increasing divergence in customer value between the control group and the rest of the customer universe. Stated simply, it will allow you to answer the question, “What is the company getting for all the money we’ve invested?”

The calculation for answering that question is fairly simple. Whatever measures are used, the average values in the control group should be compared to the same average values in the rest of the universe. Since the only variable is the presence of database marketing activity, the incremental difference between the two is the gain or loss that is a direct result of that activity.
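
As a minimal sketch of that arithmetic, assuming a hypothetical customer table with an is_control flag and a 12-month revenue figure (the column names and values are illustrative, not anything specific to your own database):

```python
import pandas as pd

# Hypothetical customer table: one row per customer, an is_control flag,
# and whatever value measure you track (here, 12-month revenue).
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6],
    "is_control": [True, False, False, True, False, False],
    "revenue_12m": [120.0, 180.0, 210.0, 95.0, 160.0, 240.0],
})

control_avg = customers.loc[customers["is_control"], "revenue_12m"].mean()
marketed_avg = customers.loc[~customers["is_control"], "revenue_12m"].mean()

# The only variable separating the two groups is exposure to the database
# marketing program, so the gap is the program's incremental contribution.
incremental = marketed_avg - control_avg
print(f"control avg: {control_avg:.2f}  marketed avg: {marketed_avg:.2f}  "
      f"incremental: {incremental:.2f}")
```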

A word of caution, however: Because the control group will be scaled for relatively gross measurements, you should never attempt to use it to measure more granular detail, such as creative testing, market-to-market comparisons, and the like. The presence of the enterprise-wide control group absolutely, positively, does not eliminate the necessity for testing scenarios and setting up test cells to properly measure them. The control group measurement, as well as the control group itself, should be a “background” methodology, one that goes on indefinitely and has zero effect on your other planning. Just learn to think of your marketable universe as being slightly smaller than it actually is. The best way to implement this is to make the control group flag a global exclusion in your database processing, so there is no chance that you will ever contaminate the universe and thus reduce its viability as a statistically reliable measure.
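
One way that global exclusion might look in practice, sketched here on the assumption that every campaign selection is run against a customer table carrying the same hypothetical is_control flag:

```python
import pandas as pd

def select_campaign_audience(customers: pd.DataFrame, campaign_rule) -> pd.DataFrame:
    """Apply a campaign's own selection logic, but always drop the
    enterprise control group first so no promotion ever reaches it."""
    marketable = customers[~customers["is_control"]]  # the global exclusion
    return marketable[campaign_rule(marketable)]

# Hypothetical usage: a promotion aimed at customers with 12-month revenue
# above 150, drawn only from the non-control universe.
# audience = select_campaign_audience(customers, lambda df: df["revenue_12m"] > 150)
```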

Enterprise control group measurement should be limited to high-order dimensions of data such as:

*Average dollar value of the customer (revenues).

*Average number of relationships (cross-selling).

*Average length of time on the books (tenure).

*Average retention or attrition (churn).

In all cases, the entire control group universe should be compared to the entire non-control group universe of customers and never broken down into any finer “slices” of the data. This should be a monthly or quarterly report, with measurements both for the specific month covered by the report and for “rolling” 12-month averages.
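
One possible shape for that report, sketched under the assumption that a monthly snapshot file (the file name and column names here are hypothetical) carries the control flag and the four high-order measures listed above:

```python
import pandas as pd

# Hypothetical monthly snapshot file: one row per customer per month, with
# the control flag and the four high-order measures listed above.
snapshots = pd.read_csv("monthly_customer_snapshots.csv",
                        parse_dates=["snapshot_month"])

metrics = ["revenue", "num_relationships", "tenure_months", "retained"]

# Average each measure by month for the control group and everyone else.
monthly = (snapshots
           .groupby(["snapshot_month", "is_control"])[metrics]
           .mean()
           .unstack("is_control"))

# Report the latest month alongside a rolling 12-month average of each measure.
rolling_12m = monthly.rolling(window=12, min_periods=12).mean()
report = pd.concat({"current_month": monthly.tail(1),
                    "rolling_12m": rolling_12m.tail(1)})
print(report)
```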

Customers must be selected for inclusion in the control group randomly. In most cases, this can be accomplished by simply flagging every “nth” customer in the file as a control group customer, then using the same nth method to add new customers to the control group with each update of the file. This will constantly refresh the control group to keep it homogeneous with the rest of the customer universe. Care should be taken, however, to make certain that whatever method is used, the sample is random. Nth-ing the file might not be random if the incoming data is ordered in some fashion.
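
If nth-ing is suspect, a straight random draw at a fixed rate does the same job. A minimal sketch, assuming new customers arrive as a table at each file update and that roughly 1% of them should be flagged (both assumptions are illustrative):

```python
import numpy as np
import pandas as pd

RNG = np.random.default_rng(seed=2024)  # fixed seed keeps assignments reproducible
CONTROL_RATE = 0.01                     # illustrative: roughly 1% of the file

def flag_new_customers(new_customers: pd.DataFrame) -> pd.DataFrame:
    """Randomly assign each newly added customer to the control group.

    A true random draw sidesteps the trap noted above: taking every nth
    record is only random if the incoming file itself is unordered.
    """
    flagged = new_customers.copy()
    flagged["is_control"] = RNG.random(len(flagged)) < CONTROL_RATE
    return flagged

# Run this against each file update so the control group is refreshed with
# new customers and stays homogeneous with the rest of the universe.
```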

For a file of two to three million customers, I would recommend setting the sampling routine so that it will create and maintain a control group of at least 50,000 customers. For larger files, 1% of the universe or less will usually suffice. While this number will probably prove to be larger than required for statistical validity, it is best to err on the side of volume, at least in the beginning. Initial measures need to be “unassailable.” With experience, the size of the control group might be reduced.
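
For a rough sanity check on whether a given control group is large enough, the standard two-proportion sample-size formula can be applied to a measure such as retention; the rates and the resulting figure below are purely illustrative:

```python
from scipy.stats import norm

def per_group_sample_size(p_base: float, p_alt: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Standard two-sided z-test sizing for detecting a shift in a rate
    (e.g., annual retention) from p_base to p_alt."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    variance = p_base * (1 - p_base) + p_alt * (1 - p_alt)
    return int((z_a + z_b) ** 2 * variance / (p_base - p_alt) ** 2) + 1

# Illustrative only: spotting a one-point change in an 80% retention rate
# takes roughly 25,000 customers per group, which is why 50,000 is a
# comfortable starting point.
print(per_group_sample_size(0.80, 0.81))
```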

And, finally, it is very important that the methodology for establishing and maintaining the enterprise-wide control group—and perhaps even knowledge of the existence of the control group itself—is not shared with the sales channel. If a salesperson knows who is in the control group and decides to take some special action to sell this previously untouched universe, the ability to deliver a true incremental measurement will be lost.

If you implement an enterprise-wide control group, you’ll always be able to answer questions about the value of what you do. And it will provide useful, ongoing data that will help justify budgets when that inevitable crunch time comes to your company.

Sooner or later, though, it will probably dawn on someone in the organization that your control group represents a lost opportunity cost to the enterprise. The refrain usually goes something like, “Some of those customers you’re ignoring are just like the ones that buy a lot from us. We need sales and you don’t need to prove the value of database marketing. We get it! Let’s unplug it!” If that happens to you and you’re unable to sell the idea that you need a long-term measurement methodology, you need someone to do an intervention on your behalf. This is important.

One final note: People who follow this advice might occasionally experience one unintended consequence of the methodology: isolated adverse customer reaction. If you have two similar, very good customers who happen to be very close friends and only one of them gets an offer that is creatively positioned as “a special benefit for our best customers,” that might come up in a conversation between the two of them. If it does, the control group customer who did not get the offer might feel snubbed and call to complain. It doesn’t happen often, but if it does, apologize profusely, explain that there must have been some mistake, extend the same offer to the caller, and, if possible, sweeten the offer with an extra discount or other benefit to show them that you really care about their business. Then recode them immediately so they’re no longer in the enterprise-wide control group. You want to protect the statistical integrity of the universe, but not at the expense of losing a good customer. You’re not guarding state secrets. You’re just measuring a marketing program, and recoding a handful of control group customers over the course of a year won’t have any effect on the overall measurement.

This piece is excerpted from “The Business of Database Marketing,” by Richard N. Tooker. It will be published in October by Racom Communications (www.Racombooks.com).
