BankThink

It's Okay for Stress-Tested Banks to Talk Back in School

If an examiner is displeased with some aspect of a bank's operations, but the criticism does not rise to the level of an enforcement action or a failed stress test, the examiner will issue a "matter requiring attention," or MRA. The finding can have a huge impact, particularly on the bank's market capitalization if the MRA is made public.

But an MRA should not be taken as gospel. Just as the validity of a scientific theory is tested by attempts to disprove it, banks should feel empowered to challenge MRAs since examiners are not always right.

The potential fallout from an MRA arising from a Federal Reserve-managed stress test is particularly notable. A single MRA does not mean a failed stress test, but failure becomes likely if the Fed finds the same perceived flaw more than once. MRAs therefore often send banks into a frenzy of appeasement. But it doesn't have to be that way in every case. While most MRAs related to modeling are bold and insightful, some are irrelevant and a few are demonstrably wrong.

Regardless of the MRAs' quality, banks' responses to the findings vary. We have seen banks follow, to the letter, the dictate of a particular MRA, only to find that the solution generates an equal but opposite wrist-slap the very next year. Some banks find themselves forced to throw out entire model development programs on the basis of a handful of MRAs, only to find that the new models attract similar numbers of Fed missives, albeit of a slightly different tone. In some of these cases, banks are interpreting advice about deck chair placement as a directive to scuttle the ship. Needless to say, this is a very expensive way to sail.

Call me starry-eyed, but I think stress testing practices should be driven by scientific principles. MRAs should not be treated as papal bulls demanding absolute adherence on pain of excommunication. Rather, banks should have the confidence to treat MRAs as exactly what their name says, matters requiring attention, and to respond according to sound scientific reasoning.

The central tenet of this approach is "falsifiability." Let's say an examiner has found that some element of a bank's stress testing model is suboptimal and that an alternative approach is preferable when applied to the bank's data. The bank's initial response should be a vigorous attempt to reject the Fed's hypothesis by demonstrating that the existing model is not, in fact, suboptimal.

There's nothing personal going on here. It's just the scientific thing to do.

If the results of the bank's data modeling hold, the "suggestions" in the MRA should be rejected in favor of the bank's pre-existing approach. Of course, if these vigorous challenges to the Fed's authority are unsuccessful, the actions implied by the MRA will have to be adopted until these, too, are inevitably found wanting by future stress test practitioners. All of this activity should, naturally, be carefully documented and validated by an independent team in the bank.
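To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of the kind of evidence a bank might assemble. The data, model choices and names (an incumbent linear regression versus a ridge regression standing in for whatever the MRA proposes) are invented for illustration; the point is simply that the two approaches can be compared on the same holdout data before anyone concedes that the existing model is flawed.

```python
# A minimal sketch, not a prescribed method: compare the bank's incumbent model
# against the alternative implied by an MRA on the same out-of-sample data.
# All data and model choices here are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical macro drivers (e.g., unemployment, GDP growth, rates) and loss rates.
X = rng.normal(size=(400, 3))
y = 0.02 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.1, size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

incumbent = LinearRegression().fit(X_train, y_train)   # the bank's existing approach
alternative = Ridge(alpha=1.0).fit(X_train, y_train)   # the approach suggested in the MRA

# Squared forecast errors on the holdout sample.
err_incumbent = (y_test - incumbent.predict(X_test)) ** 2
err_alternative = (y_test - alternative.predict(X_test)) ** 2

print("Incumbent out-of-sample MSE:  ", err_incumbent.mean())
print("Alternative out-of-sample MSE:", err_alternative.mean())
```

If the incumbent's errors turn out to be no worse than the alternative's, the bank has a documented, replicable basis for pushing back rather than capitulating.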

So how should Fed examiners respond after an MRA, issued with the utmost care and professionalism, has been rejected by a bank? A true scientist-examiner will carefully assess and debate the evidence in the bank's submission and be willing to adjust the finding if the evidence is compelling. If nothing else, the examiner should respect the task given to the bank's modelers and not be affronted by the fact that his or her views are being challenged. The examiner should welcome any respectful challenge in the name of scientific inquiry.

And as with any normal dialogue in the scientific and economic disciplines, where it is usually acceptable for two plausible competing theories to both be left unrefuted, such disagreements do not necessarily have to produce a clear answer. If the empirical findings from the two approaches are similar, such that neither statistically dominates the other, it may be acceptable to all parties simply to agree to disagree.
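What "statistically dominates" might mean in practice can be sketched the same way, again with invented numbers: a paired test on the two models' squared forecast errors, in the spirit of a Diebold-Mariano comparison. A large p-value indicates that neither approach demonstrably outperforms the other, which is precisely the agree-to-disagree territory described above.

```python
# A hypothetical sketch of testing whether one model statistically dominates another.
# The error series are simulated; in practice they would be the holdout errors
# from the comparison above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

err_incumbent = rng.chisquare(df=1, size=100) * 0.01                      # squared errors, incumbent
err_alternative = np.abs(err_incumbent + rng.normal(scale=0.002, size=100))  # squared errors, alternative

# Paired test on the loss differential between the two models.
t_stat, p_value = stats.ttest_rel(err_incumbent, err_alternative)
print(f"Mean loss differential: {np.mean(err_incumbent - err_alternative):.5f}")
print(f"p-value: {p_value:.3f}")  # a large p-value means no clear winner
```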

Ultimately, science can be discarded in favor of blind authority if the examiner sticks to his or her guns. The Fed, after all, is the one tasked with judging the validity of the bank's stress test model, not the other way around. But just like in Monty Python, no one should expect the Spanish Inquisition. Bank examiners, in some ways, have the power of inquisitors, in that their qualitative assessments of bank stress testing practices have no obvious earthly avenue for appeal. But given that modeling is inherently contentious, it is critical that the Fed conduct its investigations with humility and reason and be willing to change its view in light of the evidence.

Tony Hughes is managing director of credit analytics at Moody's Analytics, where he manages the company's credit analysis consulting projects for global lending institutions.
