Transactional data permit companies to segment their customer base into best, worst, and average customers. This permits much more focused marketing expenditures.
But it doesn't provide any clues as to who your customers are as individuals. How old are they? What are their income and family status? Do they own homes or cars? What are their lifestyle preferences?
Knowing more about your best customers can help you do two things:
* First, relate to them more effectively in your communications.
* Second, find other customers like them.
In order to gain this knowledge, you must either collect information directly from your customers - often a costly proposition - or purchase data from outside sources.
When obtaining external data to overlay on your file, an immediate question is which data elements to use. The answer will depend on the business problem you're trying to solve.
The key question is, "How will this data element enhance the value of the names to which it is appended?" As often as not, an individual data element has less value by itself than it does in relation to other elements.
Let's say you're offering second mortgages for parents facing college tuition. If you look for all heads of households in the likely age bracket (40 to 55), you may get a lot of households that don't have college-bound kids, or that don't have enough income to afford college no matter what. But if you look for households with incomes of $40,000 and up, head of household age 40 to 55, spouse age 40 to 55, and oldest child in teenage years, you're likely to get the group you want. You'll also want to know whether the household has enough available equity to support a second mortgage.
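The compound selection above can be sketched as a simple filter. This is a minimal illustration, not a real file layout; the field names (income, head_age, available_equity, etc.) and the $20,000 equity floor are assumptions made up for the example.

```python
# A minimal sketch of the compound second-mortgage selection described above.
# All field names and thresholds below are illustrative assumptions.

def qualifies_for_offer(hh, min_equity=20_000):
    """Return True if a household matches the target profile:
    income $40K+, head and spouse aged 40-55, oldest child a teenager,
    and enough available equity to support a second mortgage."""
    return (
        hh["income"] >= 40_000
        and 40 <= hh["head_age"] <= 55
        and 40 <= hh["spouse_age"] <= 55
        and 13 <= hh["oldest_child_age"] <= 19
        and hh["available_equity"] >= min_equity
    )

households = [
    {"income": 55_000, "head_age": 47, "spouse_age": 45,
     "oldest_child_age": 16, "available_equity": 60_000},  # fits the profile
    {"income": 55_000, "head_age": 47, "spouse_age": 45,
     "oldest_child_age": 8, "available_equity": 60_000},   # child too young
    {"income": 30_000, "head_age": 50, "spouse_age": 48,
     "oldest_child_age": 17, "available_equity": 60_000},  # income too low
]

targets = [hh for hh in households if qualifies_for_offer(hh)]
print(len(targets))  # 1
```

Note that age alone admits two of the three sample households; it is the combination of elements that isolates the one prospect worth mailing.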
Lots of companies use transaction data, demographic data, and lifestyle data to create business strategies. Through the years, these practitioners have amassed a remarkable wealth of knowledge about their marketplaces, about how to use data for maximum market penetration, and how to maximize the lifetime value of individual customers.
These experienced data users know what others are still learning - when it comes to this sort of information, you have to recognize its limitations and use it for what it can do for you. The glass is never more than half full, because data are almost never, ever perfect, particularly when they have been compiled by outside sources. Errors creep in through inaccurate reporting by consumers, or during data entry and compilation. Sometimes a piece of information is not provided by a consumer, forcing data vendors to estimate the correct information with computer models. And, when you're trying to overlay informative data onto your own customer file, you're never going to get information on every single name in your file.
These limitations, however, seldom create impassable barriers. One simply needs to understand the three factors that determine quality: precision, breadth, and depth.
An understanding of the construction of each data element will help you determine whether its precision will be sufficient for your purposes.
The keys to precision are:
Reliability. Did the compiler measure the behavioral characteristic the same way each time the data were captured? If one compiler estimates income using one method, will it produce results equivalent to those of another compiler that employs a different method or data source?
Most experienced practitioners find that imperfect precision does not undermine the predictive power of the data, as long as the imprecision is consistent and explainable. A data element represented as a range, for example, can still be tremendously predictive if it is compiled the same way every time, even when a survey of customers shows it is off in the particulars. Most compiler models help adjust for inaccuracies or omissions in the data, and a multisourced file increases the likelihood of an accurate reading on any given element.
Validity. Is what was measured what was supposed to be measured? For instance, does an estimated income truly measure a person's before-tax income, or is it more reflective of after-tax income or disposable income?
Breadth. Some marketing programs require breadth of data. Breadth describes the coverage you can get on an element, also known as the match rate. Some compilers have enormous breadth of coverage but offer only a few data elements for each individual name or household. Others cover fewer names but offer more depth on the ones they do cover.
Depth. Some applications require depth of coverage, which describes how many data elements you can obtain for an individual name. Some compilers have great depth of data, so you can learn a great deal about each individual name or household on your file. Depth varies by data element and by source. Multisourced information, where multiple files are combined, offers a strong opportunity to increase data depth.
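Breadth and depth can be put into numbers with a little arithmetic: breadth is the share of your file the overlay matches at all, and depth is the average number of elements supplied per matched name. The sketch below is purely illustrative; the customer IDs, element names, and values are invented for the example.

```python
# Illustrative breadth (match rate) and depth calculation for an overlay.
# The overlay is modeled as {customer_id: {element: value}}; all data are made up.

customer_file = ["c1", "c2", "c3", "c4", "c5"]

overlay = {
    "c1": {"income": "40-50K", "home_owner": True, "age": 44},
    "c2": {"income": "30-40K"},
    "c4": {"age": 51, "home_owner": False},
}

matched = [cid for cid in customer_file if cid in overlay]

# Breadth: fraction of the file the overlay covers at all.
breadth = len(matched) / len(customer_file)

# Depth: average number of data elements per matched name.
depth = sum(len(overlay[cid]) for cid in matched) / len(matched)

print(f"match rate: {breadth:.0%}, average depth: {depth:.1f} elements")
# prints "match rate: 60%, average depth: 2.0 elements"
```

Combining a second source that covers c3 and c5, or adds elements to c2, would raise breadth or depth respectively, which is the arithmetic behind the multisourcing point above.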
It is important to consider all three dimensions of data quality simultaneously when you're preparing to overlay your file. Good data enhancement cannot occur if you have very high precision but poor depth and breadth. Likewise, a high match rate may be misleading if depth or precision is low. The goal is to maximize all three, but this seldom can occur; one must typically deal with trade-offs among them.
Mr. Hinman is a business development executive at Acxiom Corp. in Conway, Ark.