Pity the poor credit card company. Not long ago, profits from the ones owned by banks were propping up their grateful parents, and the monolines-First USA, Capital One Financial Corp., Advanta Corp.-were the darlings of Wall Street. The concept of leveraging data to make smart decisions-pioneered by card companies and taken to a new level by the monolines-drew admiration and envy in other parts of retail banking.

Then Bank of New York startled the banking community with its June 1996 special loan-loss provision of $350 million. Other issuers followed suit. In March 1997, Advanta reported a $20 million loss in the first quarter and a chargeoff rate of over 7%. This loss was the first in the company's history; Advanta's card business has since been sold to Fleet Financial Group.

The industry's slide during 1996 and 1997 can clearly be seen in progressively higher chargeoff rates. (See table.) Delinquencies and losses at most large bank-owned card operations got worse, and the monolines-which traditionally enjoyed much lower than average default rates-began seeing loss rates of 6% and higher. Industry delinquencies hit record levels in 1996, presaging even higher chargeoffs in 1997; all this happened in an economy far from recession.

Is there really a problem here? And is the use of newfangled statistical models-rather than seasoned human judgment-to blame?

There are several reasons for the deterioration.

First, the industry center of gravity has shifted. Higher-income households have reduced their consumer debt in real terms, and lower-income households (under $25,000 a year) have increased theirs. This shift in customer mix has lowered average credit quality, but raised the average balance per card. In part, chargeoff rates reflect this shift.

Second, marketing is way up, with many households receiving dozens of "pre-approved" solicitations a year. Inevitably, some customers have given in to the temptation and taken on two or three additional lines of credit. Selling pressure may be responsible for some consumers' slide into excess indebtedness.

Third, competition is driving pricing lower: The average annual percentage rate is generally sliding, despite the downward shift in average credit quality. This reduces banks' revenue per dollar of balances, but the lower rates are not enough to ease customers' total cost of being in debt appreciably. Meanwhile, fees (other than annual fees) have been sharply raised.

Fourth, there has been a sea change in consumers' willingness to declare bankruptcy. Liberalization of bankruptcy law and other changes have led to a huge rise in unpredictable chargeoffs.

Finally, some issuers have come enthusiastically, but late, to the modeling wars. They are deploying horse cavalries where others are deploying tanks. Some issuers are using relatively primitive statistical models and decision-support processes to support very large-scale initiatives. Though these models might once have yielded acceptable aggregate results, the combination of changes in the marketplace and superior weapons in competitors' modeling arsenals is ensuring these initiatives will substantially worsen portfolio performance.

Picked apart, the changes that contribute to today's situation are less alarming and more manageable than has been feared. We cannot yet say how far the changes in consumer behavior will go, but we can be sure they will restabilize at some new-and possibly permanently higher-level.

The role of statistical modeling for decision support is a major issue: Have the events of the last year uncovered a fatal flaw in this new methodology?

A lot more hangs on the answer to this question than whether a few card industry executives lose their jobs. Throughout retail banking, a huge wave of investment in data warehouses and data mining technology is going on, predicated on the belief that card industry decision-support techniques can be brought to bear in many other areas.

The use of statistical modeling in the card business goes back many years. Scorecards that ranked customers-or prospects-based on the odds of default came into widespread use in the 1970s. A particular score was used as a "cutoff" when considering a specific action. Similar scores have been created recently to rank consumers by behavior, like their propensity to carry a balance or to stop using their card.
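The mechanics of a scorecard-and-cutoff decision can be sketched in a few lines. The attributes, point weights, and cutoff below are all invented for illustration; real scorecards are fitted statistically to historical outcomes.

```python
# A minimal, hypothetical scorecard: points for each attribute are summed,
# and the applicant clears the cutoff or does not. All weights invented.

SCORECARD = {
    "years_at_address_over_2": 30,
    "has_checking_account": 45,
    "delinquency_last_12_months": -80,
    "debt_to_income_under_30pct": 40,
}

CUTOFF = 60  # scores at or above the cutoff are approved

def score(applicant: dict) -> int:
    """Sum the points for each attribute the applicant exhibits."""
    return sum(pts for attr, pts in SCORECARD.items() if applicant.get(attr))

def decide(applicant: dict) -> str:
    return "approve" if score(applicant) >= CUTOFF else "decline"

applicant = {
    "years_at_address_over_2": True,
    "has_checking_account": True,
    "delinquency_last_12_months": True,
}
print(score(applicant), decide(applicant))  # 30 + 45 - 80 = -5 -> decline
```

The same mechanism serves behavior scores: only the attributes and the action tied to the cutoff change.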

In each case, the prediction assumed that factors shown to be predictive historically would prove similarly predictive in the immediate future. Experience shows again and again that predictive models of human behavior are reliable guides.

Experience also shows that the quantitative methods supporting such predictions are invariably more consistent and accurate than human judgment. Indeed, these predictive models can be so good that in a stable market they come to be relied on not just as decision-support tools, but as decision-making tools. And this, of course, can become a major flaw.

If the market environment is not stable, then the predictive models will temporarily give inaccurate results. There is no evidence that any other method will give a better result: Even at the end of 1996, the models that predicted default may have underestimated the level of default behavior, but they still ranked consumers correctly from worst to best.
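The distinction between calibration and ranking is worth making concrete. In this toy sketch (all figures invented), a uniform deterioration in the market raises every segment's actual default rate above the model's prediction, yet leaves the rank ordering of segments untouched:

```python
# Invented numbers: a model calibrated on an earlier period underestimates
# default levels after the market worsens, but a roughly uniform
# deterioration preserves the worst-to-best ranking of segments.

predicted = {"A": 0.02, "B": 0.05, "C": 0.09, "D": 0.15}  # model's default odds
actual = {seg: p * 1.5 for seg, p in predicted.items()}    # market worsens ~50%

rank_predicted = sorted(predicted, key=predicted.get)
rank_actual = sorted(actual, key=actual.get)
print(rank_predicted == rank_actual)  # True: levels are off, ranking is not
```

A cutoff set in dollar-loss terms would need recalibrating, but any decision that depends only on relative risk survives the shift.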

In at least one prominent case of a card issuer getting into trouble, the problem was not the decision-support models, but the fact that the marketing area overrode those models to fulfill department goals for approval rates.

The answer to the apparent dilemma does not lie in abandoning quantitative methods, even temporarily. Rather, firms should aim to develop more comprehensive predictive models. The surge in bankruptcy declarations in the United States is already yielding new models that improve on the well-established ability to predict more "traditional" patterns of delinquency and default.

Firms should also change the way they manage decision-support tools, taking advantage of the higher and lower confidence levels the software assigns. For example, suppose a decision-support model examines each active customer account and computes the answer to the question, "Which customers would become more profitable if I offered them a balance-transfer solicitation with a six-month introductory rate of 5.9% on the transferred balances?"

In principle, the model can compute the answer for millions of cards and present the decision-maker with a fait accompli. It can separate customers into two sets: those whose value would rise and those whose value would fall.

In practice, the first set, marked for action, can be examined further. Some would rise in value a lot, some only a little. For some, the rise in value would depend most critically on the accuracy of prediction of one particular behavior; for others, it would depend on the accurate prediction of another behavior.

Examining these factors can help card issuers adjust their strategies. When trust in the model is high, you come closest to accepting its advice to act in all cases where value gain is expected to be positive. When market conditions are changing and the correspondence between predicted and actual behaviors slips, you follow the advice of the model only after making precautionary adjustments.
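The precautionary adjustment described above can be sketched as a margin on the model's predicted value gains. The customer identifiers and dollar figures here are hypothetical:

```python
# Hypothetical sketch: in a stable market, act on every positive predicted
# gain; when correspondence between predicted and actual behavior slips,
# require the gain to clear a precautionary margin. All figures invented.

customers = {
    "c1": 120.0,  # predicted value gain ($) from the balance-transfer offer
    "c2": 15.0,
    "c3": -40.0,
    "c4": 4.0,
}

def select_targets(predicted_gain: dict, margin: float = 0.0) -> list:
    """Return customers whose predicted gain exceeds the margin."""
    return sorted(c for c, gain in predicted_gain.items() if gain > margin)

print(select_targets(customers))             # stable market: trust the model
print(select_targets(customers, margin=10))  # shifting market: add a cushion
```

Raising the margin trades forgone marginal gains for protection against prediction error, which is precisely the posture the text recommends while conditions are in flux.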

With better predictive models, or as the state of the market stabilizes, the decision-support process can once again rely more completely on the underlying models, now updated to reflect the changed conditions.

People should think twice before viewing the current difficulties among credit card issuers who rely on statistical models as a deserved case of hubris. These difficulties are simply a stage in the evolution of a powerful new way of doing business-they are not a sign of its imminent demise.
