BANKTHINK
RISK DOCTOR

Models Like VaR Only as Good as Risk Managers' Imaginations


Over the last few weeks, recriminations have been pouring in over the failure of value-at-risk models to identify the excessive risks in the synthetic credit portfolio of JPMorgan Chase's chief investment office.

None of this, by the way, is news: problems with VaR models were apparent long before the macro-hedge strategy on the credit portfolio was ever a gleam in the eyes of traders in the CIO group. We can look back to Long-Term Capital Management in 1998 and the financial crisis of 2008, for example, for other spectacular instances in which sophisticated financial institutions ran aground on misleading VaR results.

The irony is that JPMorgan, after all, invented the technique decades ago and then freely disseminated it to the industry, giving rise to the risk management mystique the company has enjoyed ever since.

Moreover, regulators and standard setters such as the Basel Committee freely promoted the use of VaR in designing regulatory capital requirements, which cemented the technique firmly among risk practitioners. The ability to frame the firm's risk in simple-to-understand terms (e.g., the loss over a specified horizon, such as one day, that will not be exceeded with, say, 95% confidence) is a risk quant's dream.
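To make that definition concrete, here is a minimal sketch of one-day historical-simulation VaR. It assumes NumPy and a hypothetical series of daily P&L figures in millions of dollars; the numbers are illustrative, not from any real trading book.

```python
import numpy as np

# Hypothetical history of 1,000 daily P&L observations ($ millions).
rng = np.random.default_rng(seed=42)
daily_pnl = rng.normal(loc=0.0, scale=10.0, size=1000)

def historical_var(pnl, confidence=0.95):
    """Loss that will not be exceeded with the given confidence,
    estimated from historical P&L."""
    # The (1 - confidence) quantile of P&L is the cutoff; negate it
    # to report VaR as a positive loss figure.
    return -np.quantile(pnl, 1.0 - confidence)

var_95 = historical_var(daily_pnl, 0.95)
print(f"1-day 95% VaR: ${var_95:.1f} million")
```

Note that the estimate is only as good as the history fed into it: a calm sample produces a reassuringly small VaR regardless of what the market does tomorrow.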

Thus VaR's technical elegance, relative ease of implementation and theoretical foundation spawned a love affair among risk managers, seeking their equivalent to a Grand Unified Theory of risk.

Unfortunately, for all the success the risk profession has had in raising its credibility by borrowing quantitative techniques from disciplines including the physical sciences, reliance on models such as VaR clearly signals that risk management is fundamentally a social science built on market data that is, at times, not well behaved.

Lurking behind the mathematical elegance of VaR is the dirty little secret that winds up biting even the smartest quants at some point: the models are fed by crucial assumptions about the likelihood of different market outcomes.

In the vast majority of instances, perhaps 99% of the time, the results hold and nothing interesting happens. But markets don't typically just breach a VaR limit. They plow right on through to levels the quants and risk managers never imagined, turning that dream into a nightmare in a second. 

Suppose a bank tells a trader he must keep his one-day VaR, at 99% confidence, under $50 million. He could game that limit with a strategy that loses at most $47.5 million 99.5% of the time but exposes the firm to a one-day loss of $500 million the other 0.5% of the time. Measured at the 99% confidence level, the strategy's VaR is only $47.5 million, comfortably inside the limit, while the probability-weighted average loss still works out to roughly the $50 million his employer specified. The firm is now a lot more exposed than it realizes.
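The gaming arithmetic can be checked directly. This sketch, assuming NumPy, simulates the hypothetical two-outcome strategy above and shows that its 99% VaR sits inside the $50 million limit even though the tail loss is ten times larger.

```python
import numpy as np

p_tail = 0.005       # 0.5% chance of the blowup outcome
loss_normal = 47.5   # $ millions, the loss the other 99.5% of the time
loss_tail = 500.0    # $ millions, the one-day blowup

# Simulate one million trading days of this payoff.
rng = np.random.default_rng(seed=7)
losses = np.where(rng.random(1_000_000) < p_tail, loss_tail, loss_normal)

var_99 = np.quantile(losses, 0.99)  # 99th-percentile loss
print(f"99% VaR: ${var_99:.1f}M")             # 47.5 (inside the $50M limit)
print(f"Worst case: ${losses.max():.0f}M")    # 500
print(f"Average loss: ${losses.mean():.2f}M") # roughly 49.8, near the limit
```

Because the blowup occurs less than 1% of the time, the 99% quantile never sees it: the limit is satisfied on paper while the real exposure is an order of magnitude larger.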

A false sense of security envelops the business lines and risk managers who rely on VaR tools, because those tools uniquely provide a consolidated view of risk that is comparable across traders and groups. For the same reason, VaR remains a mainstay of the risk manager's toolkit and, notwithstanding recent financial blowups, will likely survive in some incarnation for the foreseeable future.

That is not to say efforts to refine VaR measurement will not proceed apace. We now have regulatory whiplash from the Basel Committee's recommendation of the latest rage among risk quants, the expected shortfall method, in place of VaR. But while this metric is designed to better capture bad outcomes in the far end of conceivable market losses, i.e., the "tail risk," it is still vulnerable to errors in assumptions and to abrupt shifts in market conditions unforeseen in the data used to construct the estimate.
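The difference between the two metrics can be sketched numerically. In this minimal example, assuming NumPy and a hypothetical heavy-tailed loss series, expected shortfall averages the losses beyond the VaR cutoff, so it registers the depth of the tail in a way the VaR number alone does not; but it still sees only the tail actually present in the data.

```python
import numpy as np

# Hypothetical heavy-tailed daily losses: Student's t with 3 degrees
# of freedom, scaled to a plausible $ millions magnitude.
rng = np.random.default_rng(seed=1)
losses = rng.standard_t(df=3, size=100_000) * 10.0

def var_es(losses, confidence=0.99):
    cutoff = np.quantile(losses, confidence)  # VaR: the cutoff loss
    es = losses[losses >= cutoff].mean()      # ES: mean loss beyond it
    return cutoff, es

var_99, es_99 = var_es(losses, 0.99)
print(f"99% VaR: {var_99:.1f}, 99% ES: {es_99:.1f}")  # ES exceeds VaR
```

If the historical sample happens to lack a true blowup, both numbers stay small, which is exactly the vulnerability described above.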

And so we find ourselves at the brink of another "innovation" in risk management founded on the same flawed bell-curve view of outcomes as standard VaR, revered by quants and regulators alike until the next catastrophe comes along undetected by the latest technique.

This admittedly dark take on modern quantitative finance paints a dim picture of the plight of risk managers struggling to get their arms around business problems that do not conform to tidy data patterns.

The message isn't to abandon all hope ye who enter here, but to pragmatically bring good old common sense back into the witches' brew of risk analytics.

Clifford Rossi is an executive-in-residence and Tyser Teaching Fellow at the University of Maryland's Robert H. Smith School of Business. He has held senior risk management and credit positions at Citigroup, Washington Mutual, Countrywide, Freddie Mac and Fannie Mae.
