Comment: Value-at-Risk Modeling: Simpler Approach Needed

The value at risk concept has taken the risk management world by storm. As practitioners and regulators worldwide embrace it as a bona fide risk measurement tool, however, its greatest asset, the ability to summarize disparate concepts of risk throughout an organization in one uniform number, gives rise to potential inadequacies that, if not understood and accounted for, may lead to serious risk measurement oversights.

A simple definition of value at risk is the maximum expected "worst case" loss for a given confidence level over a defined period of time. A typical application of the concept would be for a risk manager to report that there is a 1% probability that more than X dollars will be lost over a 10-day period, given the current balance sheet composition.
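Under the standard model's own assumptions, this definition collapses to a one-line calculation. The sketch below is illustrative only: the portfolio value and daily volatility are hypothetical, and the normality and square-root-of-time assumptions are exactly the simplifications the rest of this article questions.

```python
import math
from statistics import NormalDist

def parametric_var(portfolio_value, daily_vol, horizon_days=10, confidence=0.99):
    """Parametric ("standard model") value at risk: assumes normally
    distributed returns with zero mean, and volatility that scales with
    the square root of time. Illustrative only."""
    z = NormalDist().inv_cdf(confidence)           # ~2.33 for 99% confidence
    horizon_vol = daily_vol * math.sqrt(horizon_days)
    return portfolio_value * z * horizon_vol

# Hypothetical inputs: a $100 million portfolio with 1% daily volatility.
var_10d = parametric_var(100_000_000, 0.01)
print(f"10-day 99% value at risk: ${var_10d:,.0f}")
```

The appeal is obvious: two inputs and a confidence level produce a single firm-wide number. The sections that follow examine what that convenience costs.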

J.P. Morgan & Co. and the Bank for International Settlements have published extensively on the "standard model" approach to value at risk. This standard model approach makes a number of assumptions in calculating value at risk.

We know that all of these assumptions are very rough approximations of reality. As a result, the assumptions behind the standard model approach give many market participants cause for concern. Let's examine them one by one.

There is no optionality of any kind in the portfolio being measured.

For any financial institution's balance sheet or trading position, a very significant portion of transactions involve options. Foreign exchange options, swaptions, caps, and floors have become common products in most major currencies.

Almost every retail banking product allows the retail customer to exercise some option: the option to prepay when the customer is a borrower, and the option to withdraw funds early when the customer is a depositor.

In the insurance industry, every whole-life policy incorporates the right to terminate the policy. In the investment management business, retail customers have the right to withdraw funds at any time. There are hundreds of callable bonds issued by U.S. government agencies, and the U.S. mortgage-backed securities market continues to rank as one of the largest pools of capital in the world.

Clearly, a better approach is needed.

The market value of every security in the portfolio is normally distributed.

Assuming the value of a security is normally distributed is unrealistic because it means there is a theoretical possibility that the value of the security can be negative. This must be distinguished from the commonly used assumption in finance that the rate of return of the security is normally distributed, which means the value of the security is lognormally distributed and thus can never be negative.

In reality, most securities have values that can never be negative: the value of a callable U.S. government bond is never less than zero, never more than par, and usually just below par.
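The difference between the two distributional assumptions can be made concrete in a few lines; the security value and volatility below are hypothetical.

```python
from statistics import NormalDist

# Hypothetical security: current value 100, 30% volatility of value.
value, vol = 100.0, 0.30

# Normal-value assumption: the probability of a negative value is
# strictly positive, however implausible that is for a real security.
p_negative_normal = NormalDist(mu=value, sigma=value * vol).cdf(0.0)

# Normal-return (lognormal-value) assumption: the future value is
# value * exp(r), and exp(r) > 0 for every r, so a negative value
# is impossible by construction.
p_negative_lognormal = 0.0

print(f"P(negative value), normal model:    {p_negative_normal:.6f}")
print(f"P(negative value), lognormal model: {p_negative_lognormal:.6f}")
```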

The historical price volatility and correlation between the values of each instrument being measured, over the period deemed "best" by the user, are accurate forecasts of the future.

We know that the volatility of a bond's price, all other things being equal, declines toward zero as the time to maturity approaches zero. Nevertheless, if the value at risk calculation is being done with a short time horizon, and if the security's current maturity is close to the historically calculated "on the run" maturity, this assumption may be acceptable.
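The pull of price volatility toward zero can be sketched with a simple duration approximation; the zero-coupon bond, yield, and yield volatility below are hypothetical, not drawn from any published model.

```python
def zero_coupon_price_vol(maturity_years, yld=0.06, yield_vol=0.01):
    """Approximate price volatility of a zero-coupon bond via duration:
    price volatility is roughly modified duration times yield volatility.
    For a zero-coupon bond, duration equals maturity, so price
    volatility falls toward zero as maturity approaches zero."""
    mod_duration = maturity_years / (1 + yld)
    return mod_duration * yield_vol

for t in (10, 5, 1, 0.25, 0.01):
    print(f"{t:>5} years to maturity: price vol = {zero_coupon_price_vol(t):.4%}")
```

A historical volatility estimated on a 10-year bond is therefore a poor forecast for the same bond once it has aged, which is why the assumption is tolerable only when the horizon is short and the security still resembles its historical benchmark.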

Cash flows paid and received between the calculation date and the future date for which value at risk is calculated are not random and therefore don't affect the calculation.

Whether it's net income simulation, value at risk, or dynamic market valuation, this problem is often more significant than many people realize. If the time horizon on value at risk is short, it's a minor problem. If one is looking one year or so into the future, it can be a serious source of complication.

For example, what if one holds a mortgage-backed security in the portfolio today? It could prepay on any of the 365 days between now and the valuation date, and the reinvestment rates and opportunities on each of those days will be different. Given this uncertainty, it is important to make reinvestment assumptions that minimize the impact on the measurement of value at risk when using the standard approach.

A more sophisticated value at risk modeling approach is needed to eliminate these deficiencies and create a more realistic measure of value at risk. The best approach applies the latest breakthrough in the valuation of complex securities: term structure model methodology.

Rather than relying on simple assumptions about the probability distribution of future security values, as the standard value at risk approach does, the term structure model approach uses a full multicurrency, option-adjusted valuation technique based on practical and realistic assumptions about the probability distribution (not the level) of future interest rates.
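As one illustration of the idea, and not a description of any particular vendor's implementation, the sketch below uses a one-factor Vasicek term structure model, a simple stand-in for the models the article describes, to simulate future short rates, revalues a zero-coupon bond on each path, and reads value at risk directly off the simulated value distribution. All parameters are hypothetical.

```python
import math
import random

def vasicek_rate_paths(r0=0.05, kappa=0.2, theta=0.06, sigma=0.01,
                       horizon=10 / 250, steps=10, n_paths=10_000, seed=7):
    """Simulate short-rate paths under a one-factor Vasicek model:
    dr = kappa * (theta - r) dt + sigma dW. Illustrative only."""
    random.seed(seed)
    dt = horizon / steps
    paths = []
    for _ in range(n_paths):
        r = r0
        for _ in range(steps):
            r += kappa * (theta - r) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        paths.append(r)
    return paths

def zero_price(r, maturity, kappa=0.2, theta=0.06, sigma=0.01):
    """Closed-form Vasicek zero-coupon bond price given short rate r."""
    b = (1 - math.exp(-kappa * maturity)) / kappa
    a = math.exp((theta - sigma**2 / (2 * kappa**2)) * (b - maturity)
                 - sigma**2 * b**2 / (4 * kappa))
    return a * math.exp(-b * r)

# Revalue a 5-year zero on every simulated rate path at the 10-day
# horizon, then take the 99th percentile loss: the distribution of
# values is an output of the model, not an assumption.
today = zero_price(0.05, 5.0)
future = sorted(zero_price(r, 5.0 - 10 / 250) for r in vasicek_rate_paths())
losses = [today - v for v in future]
var_99 = sorted(losses)[int(0.99 * len(losses))]
print(f"10-day 99% value at risk per $1 face: {var_99:.4f}")
```

Because every path produces a full option-adjusted revaluation, the same machinery handles callable bonds, prepayable mortgages, and the other embedded options that defeat the standard model.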

The power of the term structure model approach to measure and manage risk is the biggest advance in risk management in the 1990s, and it is currently the standard method for valuing caps, floors, swaptions, and other interest rate derivatives on Wall Street. Third-party vendor software to value an entire bank balance sheet and measure value at risk using this approach is readily available now.

Mr. van Deventer is president and Mr. Levin is director of U.S. operations for the Kamakura Corp., a New York risk management consulting and software development company.
