BANKTHINK
RISK DOCTOR

Banking Needs to Get Over Infatuation with Risk Models


In the years leading up to the financial crisis, the risk management profession underwent a significant transformation.

Advances in computing technology expanded our ability to mine massive databases, which coincided with an ability to develop and deploy an array of increasingly complex analytical models across the enterprise. What had been a profession whose senior leadership typically came from the ranks of commercial underwriting shifted over time toward individuals with sophisticated quantitative skills.

While analytics remain an essential part of any risk management organization, in many cases they have given organizations a false sense of security that risk can be distilled to a quantifiable set of outcomes. Unfortunately, this has at times come at the expense of sound judgment and experience, reasoned intuition and simple common sense in managing risk.

Nowhere was this more evident than during the housing boom, when complex statistical models became integral to the financial engineering, underwriting and collateral valuation process. The advent of such automated tools enabled the industry to take greater (and eventually excessive) risk, based in part on a mistaken belief that the models remained reliable and accurate across an increasingly risky set of mortgage products and alternative economic environments to which they were not well calibrated. Other models with incomprehensible names, such as the Gaussian copula, wound up being used extensively to price credit default swaps but proved fatally flawed as market conditions changed. An unusually benign economic environment reinforced and distorted this view. Eventually the model myopia that crept in during these years usurped, to varying degrees, more qualitative risk management practices that relied on actually reviewing a borrower's loan file before making a credit decision.
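The calibration problem the Gaussian copula posed can be seen in a minimal, purely illustrative sketch (all parameters hypothetical, not any bank's actual model): under a one-factor copula, two portfolios with identical marginal default rates produce wildly different tail losses depending on a single correlation assumption.

```python
import numpy as np
from scipy.stats import norm

def simulate_portfolio_defaults(n_names, pd_each, rho, n_sims, seed=0):
    """One-factor Gaussian copula simulation. Each name's latent variable is
    sqrt(rho)*M + sqrt(1-rho)*Z_i, where M is a common market factor and Z_i
    is idiosyncratic noise. A name defaults when its latent variable falls
    below the threshold implied by its marginal default probability."""
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(pd_each)                  # default threshold
    M = rng.standard_normal((n_sims, 1))           # common factor per scenario
    Z = rng.standard_normal((n_sims, n_names))     # idiosyncratic shocks
    latent = np.sqrt(rho) * M + np.sqrt(1 - rho) * Z
    return (latent < threshold).sum(axis=1)        # defaults per scenario

# Same 2% marginal default rate on 100 names; only the correlation differs.
low = simulate_portfolio_defaults(100, 0.02, rho=0.05, n_sims=20_000)
high = simulate_portfolio_defaults(100, 0.02, rho=0.60, n_sims=20_000)
print("Mean defaults (both):", low.mean(), high.mean())
print("99.9th percentile, low correlation: ", np.percentile(low, 99.9))
print("99.9th percentile, high correlation:", np.percentile(high, 99.9))
```

Average losses look nearly identical in both cases; only the extreme tail, precisely the region the data of the benign pre-crisis years could not pin down, reveals how much rides on the correlation input.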

With the benefit of hindsight, and the losses to show for it, it is easy to see how lending $500,000 to someone buying a $525,000 home, without verifying income and employment or conducting an actual appraisal, was a bomb ready to explode under the right conditions. Perhaps greater reliance on reports from quality control units performing post-origination audits of risky loans might have given the industry greater pause. But this information was often discounted in the face of modeled results showing that the risk of these products was reasonably quantifiable.

One would think that the vast self-inflicted wound the industry suffered in 2008-2009 would have left an indelible mark on those charged with managing bank risk and regulating these institutions. Certainly there has been greater awareness of model risk since the crisis, and it has become part of the accepted lexicon among senior management.

This year's nominee for the model most likely to be cited by a bank CEO was the value-at-risk model. When news broke of the failure of JPMorgan Chase's VaR model to capture the risk of the Chief Investment Office's "macro hedge" of credit default swaps, it finally brought to life the arcane world of mathematical representations of the real world. And we found out once again that no matter how hard we strive to explain human behavior with advanced analytics, understanding real life is ultimately a social, not a physical, science.

The risk profession continues to aid and abet our tendency to want to quantify everything. Risk managers are paid in part to help their organizations quantify uncertainty, with outcomes evaluated on their likelihood of occurrence. Distilling a trading group's worst expected loss over a 24-hour period, at 95% confidence, to the single number a VaR model provides has a simplicity and mathematical elegance that is both profound and at times misleading. Professional associations, seizing on the need for greater analytical firepower to support the care and feeding of such models, offer risk management certification programs focused almost exclusively on demonstrating analytical prowess. And the federal regulatory agencies have jumped on this analytical bandwagon, with the Federal Reserve's stress test program for the largest bank holding companies and the continued evolution of the Basel capital standards.
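That "single number" is easy to compute, which is part of its seductiveness. A minimal sketch of one common approach, historical-simulation VaR, shows how little of the underlying distribution survives the distillation (the P&L series below is simulated and purely hypothetical):

```python
import numpy as np

def one_day_var(pnl_history, confidence=0.95):
    """Historical-simulation VaR: the loss level that daily P&L is expected
    to breach only (1 - confidence) of the time, read directly off the
    empirical P&L distribution. Reported as a positive loss figure."""
    return -np.percentile(pnl_history, 100 * (1 - confidence))

# Hypothetical daily P&L for a trading desk, in $ millions.
rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.1, scale=2.0, size=500)

print(f"95% one-day VaR: ${one_day_var(pnl):.2f}M")
```

Everything about the shape of the tail beyond that percentile, exactly where the CIO losses lived, is discarded by construction; the number says nothing about how bad the worst 5% of days can get.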

Attempts to bring analytical rigor to measuring operational risk, for example, illustrate both the futility of strict quantification for risks that do not lend themselves to it and the abdication of common sense that can follow.

This does not mean that we should throw up our hands and abandon the quest for better analytic methods. Models will always have an important role to play in managing risk. However, they should serve as informative guideposts rather than as absolutes. The sooner we realize that effective risk management is founded on strong processes and controls, augmented by analytics, the sooner we will achieve a healthier balance for managing risk.

Clifford V. Rossi is the Executive-in-Residence and Tyser Teaching Fellow at the Robert H. Smith School of Business at the University of Maryland. 


Comments (4)
Excellent article! Thank you!
Posted by frankarauscher | Thursday, November 29 2012 at 10:37AM ET
It is a fact that complex models are often misleading, particularly when one model throws a result right opposite to another's. With experience in a traditional banking setup in India, I feel that intuition and the credit officer's judgment of the credit profile should be given utmost importance in taking decisions.
Posted by krishnamachari sampath kumar | Thursday, November 29 2012 at 10:42AM ET
Complex modelling can only take you so far; no model is 100% accurate. But if a model is accurate 95% of the time, it does make sense for analysts to use it, and the builders of the model need to put in the due diligence to ensure as much accuracy as possible.
There is a new tool recently launched for financial institutions and credit unions to help credit managers do their loan pre-screening by running a few financials of the business from the tax return, and it predicts default. Business Credit Report
Posted by Sanaa | Friday, November 30 2012 at 8:17AM ET
Couldn't agree more. As I've posted here many times, the financial services industry has become delusional in thinking that more complex, more encompassing modeling and quantitative analysis makes for better risk management. In today's new world only PhD mathematicians can understand the algorithms behind the models, and the business lines are left scratching their heads trying to decipher the output results for reasonableness. The most reliable risk management technique is having experienced business people, with an appropriate level of risk aversion, who know their customers' and counterparties' businesses as well as the other side does.
Posted by SEG NSFP | Monday, December 03 2012 at 8:09AM ET