Banking has become a complicated business, and working with regulators and other outside parties such as auditors and shareholders makes it even more so. Management teams, especially directors, increasingly need simpler tools such as dashboard reports and guidelines rather than complex methodologies and models.
Financial reporting in particular has become more difficult and inflexible due to the avalanche of regulatory and accounting pronouncements. The one key area where bankers still have some subjectivity is their determination of the allowance for loan and lease losses.
Banks generally prefer a low ALLL because loan-loss provisions are an expense that hurts earnings and capital. Regulators prefer a high ALLL, because it results in a more conservative cushion against loan risk.
To make matters more confusing for public companies, outside auditors and the SEC worry about the opposite problem: that an excessive ALLL may be used to manage earnings. In fact, recently reported profits at many big banks have been propped up largely by repeated visits to the ALLL cookie jar.
Bottom line: banks can be criticized if their ALLL is either too low or too high.
Despite detailed regulatory and accounting industry pronouncements and policy statements, banks still create their own ALLL methodologies.
Although bankers will disagree, ALLL methodologies, which differ greatly in their quantitative and qualitative risk factors, can often be tuned to deliver any desired level, since bankers can always argue that they understand their loan risk better than anyone else.
The problem is that there are no accounting, regulatory or industry guidelines for what constitutes an adequate level of ALLL.
The purpose of this article is to share the guidelines I have long used in analyzing banks - recognizing that every accountant, regulator and banker may have a different opinion.
An ALLL adequacy standard requires a qualitative and a quantitative component, namely the most relevant benchmark ratio and then appropriate adequacy levels.
The ideal ALLL benchmark should be a simple and common-sense ratio that is: easy to calculate using existing data; already familiar to and used by the industry and regulators; easy to explain to outsiders, including investors, customers and the media.
In my opinion, the one that best meets these criteria is the familiar reserve coverage ratio, defined by the FDIC as ALLL over noncurrent loans.
All regulators calculate the ratio, and a variant of it appears in stress tests, but the FDIC uses it most extensively in its quarterly reporting.
According to the FDIC, the average reserve coverage ratio (RCR) was 64% as of March 31, roughly where it has stood since June 30, 2009. It dipped to a nearly 20-year low of 58% at yearend 2009, compared with more than twice that, or 119%, on June 30, 2007, before the financial crisis began.
The median reserve coverage ratio as of March 31 of this year was 87%, the same as at mid-2009, but down sharply from the 224% level at mid-2007. The median is much higher than the average because of the large number of small banks with high RCRs.
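The gap between the average and the median is just arithmetic. A minimal sketch, using entirely invented bank figures (not FDIC data), shows how an aggregate ratio dominated by a few large banks can sit well below the median of many small banks with high coverage:

```python
# Hypothetical illustration (all figures invented): why the industry
# median RCR can sit far above the aggregate "average" when many small
# banks carry high reserve coverage.
banks = [
    # (ALLL, noncurrent loans) in $ millions
    (9_000, 15_000),   # one large bank: RCR = 60%
    (40, 20),          # small bank: RCR = 200%
    (30, 20),          # small bank: RCR = 150%
    (25, 20),          # small bank: RCR = 125%
]

rcrs = sorted(alll / noncurrent for alll, noncurrent in banks)

# Aggregate ("average") RCR: total ALLL over total noncurrent loans,
# dominated by the large bank's low coverage.
aggregate = sum(a for a, n in banks) / sum(n for a, n in banks)

# Median of the individual banks' ratios.
mid = len(rcrs) // 2
median = rcrs[mid] if len(rcrs) % 2 else (rcrs[mid - 1] + rcrs[mid]) / 2

print(f"aggregate RCR: {aggregate:.0%}")  # -> aggregate RCR: 60%
print(f"median RCR:    {median:.0%}")     # -> median RCR:    138%
```

The aggregate ratio tracks the big bank almost exactly, while the median reflects the crowd of small, well-reserved banks, which is the pattern the FDIC figures above describe.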
The RCR is the most common-sense ratio because it compares a good number (ALLL) with a bad one (noncurrent loans). The same logic is what makes the Texas ratio so appealing: it divides a bad number (nonperforming assets) by a good number (capital plus ALLL).
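The two ratios can be sketched side by side. The figures below are invented for a single hypothetical bank, and the distress threshold noted for the Texas ratio is the commonly cited rule of thumb, not a regulatory standard:

```python
# Sketch of the two ratios discussed above, using invented figures
# for one hypothetical bank (all amounts in $ millions).
alll = 120                  # allowance for loan and lease losses
noncurrent_loans = 150      # past-due and nonaccrual loans
nonperforming_assets = 180  # noncurrent loans plus foreclosed assets
capital = 400               # the "good" denominator in the Texas ratio

# Reserve coverage ratio: a good number over a bad one.
rcr = alll / noncurrent_loans

# Texas ratio: a bad number over a good one; readings near or above
# 100% have historically been read as a sign of severe distress.
texas = nonperforming_assets / (capital + alll)

print(f"reserve coverage ratio: {rcr:.0%}")    # -> 80%
print(f"Texas ratio:            {texas:.1%}")  # -> 34.6%
```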
I prefer the reserve coverage ratio to the ratio of ALLL to total loans, because the latter does not distinguish between good and bad loans. Neither ratio is forward-looking in terms of evaluating expected loan losses, but those estimates are normally based on historic losses.
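The shortcoming of the ALLL-to-total-loans ratio is easy to see with two invented banks that hold identical reserves against identically sized loan books but carry very different noncurrent-loan burdens:

```python
# Why ALLL-to-total-loans can mask risk that the RCR exposes:
# two hypothetical banks (figures invented, $ millions) with the same
# reserves relative to their books but different problem-loan levels.
bank_a = {"alll": 20, "total_loans": 1_000, "noncurrent": 10}
bank_b = {"alll": 20, "total_loans": 1_000, "noncurrent": 50}

for name, b in (("A", bank_a), ("B", bank_b)):
    alll_to_loans = b["alll"] / b["total_loans"]
    rcr = b["alll"] / b["noncurrent"]
    print(f"Bank {name}: ALLL/loans = {alll_to_loans:.1%}, RCR = {rcr:.0%}")

# -> Bank A: ALLL/loans = 2.0%, RCR = 200%
# -> Bank B: ALLL/loans = 2.0%, RCR = 40%
```

By the ALLL-to-total-loans measure the two banks look identical; the reserve coverage ratio shows that Bank B's cushion against its actual problem loans is a fifth of Bank A's.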