What does it take to be excellent? How do you know you are on the right path?
These questions are crucial to customer relationship management. The business model is new, and old measures do not apply.
The new business model says, "People are our most important asset." But in calculations of return on assets, people are an expense.
"We are focused on giving greater value to customers." But what gets measured is market share and products sold.
If credit risk were measured the way customer risk is, banks would hand an equal amount to each applicant and tell the credit committee: "Many seemed satisfied, and all promised to pay us back. Some may not."
Our clients' goal is to optimize customer relationships, and their performance confirms some logical suppositions.
We now know that the right measure of customer relationship management is the ability to increase the value of individual customer income streams. How many unprofitable customers have become less so? How many profitable customers have become more so? How many with high potential have been converted into high value?
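As a purely hypothetical sketch (the figures and thresholds are illustrative assumptions, not the author's methodology), the migration of customers between profitability states could be tallied like this:

```python
# Hypothetical sketch: tally how many customer income streams improved.
# Profit figures and thresholds are illustrative assumptions only.

def tally_migration(profits_before, profits_after):
    """Count customers whose individual income streams improved year over year."""
    unprofitable_improved = 0   # was losing money, now loses less (or gains)
    profitable_improved = 0     # was profitable, now more so
    for before, after in zip(profits_before, profits_after):
        if before < 0 and after > before:
            unprofitable_improved += 1
        elif before > 0 and after > before:
            profitable_improved += 1
    return unprofitable_improved, profitable_improved

# Illustrative annual profit per customer, before and after a CRM effort.
before = [-120, -40, 300, 75, -10]
after = [-60, -50, 420, 75, 15]
print(tally_migration(before, after))  # (2, 1)
```

Counting these transitions directly, rather than aggregate balances, keeps the measure anchored to individual customer value.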
We know that the ability of employees to target customers with potential and devise the right value propositions for them is a powerful leading indicator, as is employee enthusiasm for this task.
We know that the old measures can be dangerously misleading. There are examples of banks that gambled heavily on increasing sales per full-time-equivalent employee and others that emphasized the cross-sell. Having succeeded by great margins, they discovered little corresponding increase in profitability.
Likewise with return on assets, efficiency ratio, and credit quality. If you were to shrink assets, slash sales and service expense, and stop making loans altogether, you might do all right for a while on the traditional measures, but you would probably damage shareholder value.
These measures are authoritative to the degree that the industry has accepted them as reasonable.
Customer relationship management is not unique. No matter what you measure, even when a convincing correlation emerges between actions and results, you must still draw some subjective conclusions.
So how do you make CRM results persuasive in a business environment accustomed to tying success to fundamentally different factors? Do you measure the wrong things precisely? Or do you measure the right things and tolerate imprecision, confident that you are "directionally correct," while waiting for precision (or at least consensus) to catch up?
We find the answer in our clients' experience. Their results amount to a series of dots to connect. A compelling picture emerges of companies already increasing customer and employee value.
Year after year, the dots trace a common pattern. Most report about 5% to 15% profit improvement over business as usual. (Many actually do much better, but until the measures are more accepted we are reluctant to flaunt them.)
There are, for example, side-by-side comparisons between a group tested with a CRM process and a matching control group over months and years. First Nationwide (now Cal Fed) reported a 33% increase in household profitability in the test group, composed of 32% higher profits from new customers, 30% lower lost profits from exiting customers, better retention of A customers, and higher profitability of B customers, all relative to the control group in year one.
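The underlying arithmetic of such a test-versus-control comparison is simple relative lift. A minimal sketch, using invented numbers rather than First Nationwide's actual data:

```python
# Hypothetical sketch: relative lift of a test group over a matched control.
# The input values are invented for illustration.

def relative_lift(test_value, control_value):
    """Percentage by which the test group outperforms the control group."""
    return (test_value - control_value) / control_value * 100

# e.g. average household profitability in year one, test vs. control
print(round(relative_lift(266.0, 200.0)))  # 33
```

The point of the matched control group is that both cohorts face the same market conditions, so the lift can be attributed to the CRM process rather than the economy.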
This is powerful change in the very performance that customer relationship management is intended to address. But the industry lacks a consensus on customer profitability: what to allocate, what to estimate. So these are the right measures, but the units themselves are inescapably imprecise.
Comparison can also be made against industry averages. Sanwa Bank reported an increase in household income well above the industry average in the first year, along with a significant increase in ROA.
Household-income increase is a CRM-specific measure and can be attributed to a new focus on better targets, better tactics, and local market empowerment. Sanwa credits it with contributing to the positive ROA trend but acknowledges that it does not answer all the questions, including how the bank might have performed without better targets, tactics, and skills.
Some clients focus first on increasing specific product sales to targeted segments. Counting dollar volumes is traditional, easy, and precise for them. The results they report are accurate and touch a common point of reference for the industry. Everybody measures balances and outstandings.
But product sales are incomplete indicators of value; profitability resides more in customer behavior than in the product. So although these bankers are confident that their new process for targets, tactics, and skills is paying off, their proof is oblique.
Fleet Financial Group's 400-plus test markets grew deposits at a faster rate than their 700 control-group markets, a difference of hundreds of millions of dollars. They may not be certain how deposit growth affects profitability, but neither were they certain before, and now they can say with confidence that better targets and tactics increased their sales.
Softer measures of cultural and behavioral change, which benchmark the organization's shift toward "customer centricity," are ironically the least convincing for those accustomed to hard financial "facts." Yet they are the most persuasive for our most successful clients.
They also provide the largest weight of evidence in terms of sheer volume and consistency over the years.
As Abbey National of the United Kingdom put it: "This is our people investment, not a database investment. By optimizing employee experience, we are taking a big step in optimizing customer experience."
NatWest UK, which collects statistical evidence through employee and management surveys, said it saw "a 26% improvement in line managers who now report that they 'speak, think, and make decisions' in terms of the individual customer's value."
As such measurements take root, we are seeing a controversy playing out between two camps.
One we call "pickers," who prudently pick the winner out of a winner's circle. Once the results are in (tangible, traditional, hard-dollar, widely accepted measures), pickers are confident they have done the right things. Top 10 in ROA? Best efficiency ratio? Head of the class. Done.
The other group, "projectors," detect potential in current performance. They see data points and use them to sketch the big picture. They prize leading indicators and track them even more assiduously than end results, because they know leading indicators give them levers to pull. A common language? We're on to something here, so let's accelerate it. New appetite for calling customers? Great, now let's look at the best callers and see what they have in common.
Until the newer customer relationship management measures become familiar and refined, close approximations are better than none, imprecise measures are better than wrong ones, and oblique views are better than blind spots.
Robert McNamara, a former president of Ford Motor Co., secretary of defense, and head of the World Bank, understood the propensities. He said: "The first step is to measure whatever can be easily measured. That is O.K. as far as it goes. The second step is to disregard that which can't easily be measured or to give it an arbitrary quantitative value. This is artificial and misleading.
"The third step is to presume that what can't be measured easily isn't important. This is blindness. The fourth step is to say that what can't easily be measured really doesn't exist. This is suicide."
It is in the ability to get early feedback from leading indicators-to learn from incremental change, to spot best practices as they emerge, to pull the various levers when they can still have an impact-that true competitive advantage is born.
As the statistician John Tukey advised: "Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise."

Mr. Hall is chief executive officer of ActionSystems Inc., Dallas.