The Fed issued two proposals last week that would make significant changes to its bank capital rules — one that would integrate the CCAR process into banks’ ongoing capital requirements by translating it into a stress capital buffer, or SCB, and another that would replace the current, one-size-fits-all enhanced supplementary leverage ratio, or eSLR, with one that varies by firm. While most attention has been focused on the mechanics of the SCB and the calibration of the eSLR, less noticed has been an implicit and worrisome trend that underlies both: If enacted, they would double down on a G-SIB surcharge framework that is clearly outdated and fundamentally flawed.
The G-SIB surcharge itself is not new. It was first formulated by the Basel Committee in 2013 and implemented by the U.S. in heavily “gold-plated” — that is, more stringent — fashion in 2016. Currently, the surcharge is an additional 1.5-3.5% of risk-weighted assets that G-SIBs must maintain on top of their general risk-based capital requirements and buffers.
The Fed’s recent proposals would expand both the scope and significance of that G-SIB surcharge in two key ways. First, by combining the current CCAR process with banks’ ongoing capital requirements, the Fed would, for the first time, create a single capital requirement that includes both. That is, U.S. G-SIBs would be required to meet, on an ongoing basis, their minimum capital requirements plus their stressed capital requirement plus their U.S. G-SIB surcharge. Second, the Fed has proposed to replace the current, uniform eSLR requirement with one that varies by firm, with a buffer set for each institution at half of its U.S. G-SIB surcharge.
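To make the stacking concrete, here is a back-of-the-envelope sketch of the two mechanics just described. The specific figures for the stress buffer and the surcharge are hypothetical, chosen only to illustrate the arithmetic, and do not represent any actual firm's requirements.

```python
# Illustrative arithmetic only: the stress buffer and surcharge figures below
# are hypothetical, not any actual firm's requirements.
MIN_CET1 = 4.5          # baseline CET1 minimum (percent of risk-weighted assets)
stress_buffer = 3.0     # hypothetical stress capital buffer from CCAR results
gsib_surcharge = 2.5    # hypothetical U.S. G-SIB surcharge

# Under the SCB proposal, the ongoing requirement stacks all three.
total_rwa_requirement = MIN_CET1 + stress_buffer + gsib_surcharge

# Under the eSLR proposal, the leverage buffer would equal half the firm's
# surcharge, on top of the 3% supplementary leverage ratio minimum.
eslr_requirement = 3.0 + 0.5 * gsib_surcharge

print(total_rwa_requirement)  # 10.0
print(eslr_requirement)       # 4.25
```

The point of the sketch is simply that the surcharge now enters the binding requirement twice: once in full on the risk-weighted side, and once at half weight on the leverage side.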
If finalized, these proposals would significantly expand the importance of the G-SIB surcharge across the U.S. capital framework. That’s a problem, because the G-SIB surcharge framework suffers from fundamental conceptual and empirical flaws.
In theory, the basic idea of a G-SIB surcharge seems appealing and sensible: Since systemically important banks can pose risks to the broader financial system if they fail, they should hold extra capital to reduce the likelihood of such failures. And the more costs they would impose on the financial system upon failure, the higher the capital “tax” they should pay.
Unfortunately, in translating this reasonable idea into practice, the Basel Committee and especially the Fed have put in place a methodology that is untethered to any hard, empirical data — and that clearly ignores the factors that would now appear most relevant to a firm’s actual systemic risk.
First, consider how the G-SIB surcharge framework measures the first key variable in question — the losses a firm would impose on the broader financial system were it to fail. Here, the Basel Committee and the Fed have undertaken no meaningful empirical analysis to quantify these systemic costs or to estimate them firm by firm. Instead, as an arbitrary proxy, they invented a five-factor “systemic risk indicator” score, which uses measures like size, interconnectedness and complexity, each with its own subcomponents and relative weightings. There was no attempt to empirically assess or validate whether these components (or their relative weightings) actually reflect the relative costs that a firm’s failure would impose on the financial system. Instead, regulators simply chose assets and exposures that they assumed were related to systemic risk, and then multiplied those amounts by an arbitrary coefficient to translate them into a capital charge.
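The mechanics just described can be sketched in a few lines. The category names below mirror the framework, but every weight, indicator value and bucket cutoff is invented for illustration; none reflects the actual calibration.

```python
# Hypothetical sketch of the mechanics described above: a weighted
# "systemic risk indicator" score mapped to a capital surcharge. All
# weights, values and cutoffs are invented for illustration.
weights = {
    "size": 0.20,
    "interconnectedness": 0.20,
    "substitutability": 0.20,
    "complexity": 0.20,
    "cross_jurisdictional_activity": 0.20,
}

# Hypothetical normalized category scores for a single firm.
indicators = {
    "size": 150,
    "interconnectedness": 120,
    "substitutability": 130,
    "complexity": 110,
    "cross_jurisdictional_activity": 90,
}

# Weighted sum of category scores: the "systemic risk indicator" score.
score = sum(weights[c] * indicators[c] for c in weights)

def surcharge(score: float) -> float:
    """Map a score to a surcharge via an arbitrary bucket schedule."""
    buckets = [(400, 3.5), (300, 2.5), (200, 2.0), (100, 1.5)]  # hypothetical
    for cutoff, pct in buckets:
        if score >= cutoff:
            return pct
    return 0.0

print(round(score))      # 120
print(surcharge(score))  # 1.5
```

Note what is absent from such a calculation: nothing in the weighted sum responds to whether a firm holds bail-in debt, has a credible resolution plan, or has limited derivatives closeouts — the gap the next paragraphs describe.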
The inaccuracy of their chosen proxy speaks for itself. In the years since banks’ systemic risk indicator scores were first measured, regulators have enacted a whole host of regulatory changes expressly intended to reduce the costs that a G-SIB would impose on the financial system if it were to fail. These requirements include living wills, minimum total loss-absorbing capacity standards, adherence to a protocol that halts destabilizing derivatives closeouts upon failure, an array of liquidity requirements, derivatives margin requirements and more.
Yet crucially, the significant progress these measures have made in reducing the impact of a G-SIB’s failure on the broader financial system is not reflected at all in any G-SIB’s systemic risk indicator score. It is hard to understand how a methodology that makes no distinction between a firm with a large buffer of bail-in debt, a credible plan for its own orderly resolution and a contractual commitment preventing the closeout of derivatives positions, and one with none of those safeguards, could possibly be fit for purpose. But that is what Basel and the Fed have allowed the G-SIB surcharge methodology to become.
As it turns out, that five-factor methodology is in the process of getting worse, not better. The Basel Committee has proposed revising one of those factors — substitutability (i.e., the lack of readily available substitutes for the services that a G-SIB provides) — by removing the cap that currently applies to it. The Basel Committee has not offered any evidence that removing the cap would make the measure more accurate, and it doesn’t appear there is any. As my colleague Francisco Covas has shown, making this change would reduce the economic and statistical significance of the substitutability category as a measure of systemic risk, not improve it. It would also have a primary and disproportionate impact on four specific G-SIBs, all of them based in the U.S. Put another way, the existing methodology dramatically overstates the systemic risk posed by lack of substitutability, so the cap is actually an entirely justifiable downward calibration.
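The effect of the cap at issue can be shown with invented numbers: with the cap, an outsized substitutability score is truncated before it feeds into a firm's overall indicator score; without it, the raw score passes straight through.

```python
# Minimal illustration of the cap at issue. Both numbers are hypothetical.
raw_substitutability = 850   # hypothetical raw category score for one firm
cap = 500                    # hypothetical cap level

with_cap = min(raw_substitutability, cap)   # contribution truncated at 500
without_cap = raw_substitutability          # full 850 feeds through

print(with_cap, without_cap)  # 500 850
```

On the article's argument, that truncation is not a bug but a deliberate downward calibration of a category that overstates systemic risk.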
A second major flaw is uniquely local: When implementing the G-SIB surcharge in the U.S., the Fed chose to effectively double the Basel Committee’s G-SIB surcharge amounts. Yet it offered only the weakest of support for doing so; the only justification the Fed provided for its doubling of the surcharge was a study — issued only after it finalized its rule, in violation of the Administrative Procedure Act — that purported to show that such higher amounts were needed based on historical losses among large U.S. banks. The problem with that study is that its sample included a wide range of banks that aren’t remotely comparable in size, business model or risk profile to modern G-SIBs — namely, the top 50 American banks each year, going back to 1986. Thus, the results are driven not just by the historical loss history of large U.S. G-SIBs, but by a wide range of banks of very different types and sizes.
Consider, for example, First City Bancorp. of Texas, an $11.2 billion bank by assets that failed disastrously in the late 1980s because of its concentrated exposure to energy and agricultural markets in the Southwest. While that history does not seem particularly relevant to any estimate of potential loss experience for a G-SIB, its inclusion in the Fed’s data set exerts a meaningful influence on the Fed’s results and produces higher surcharges. As our research has shown, rerunning the Fed’s analysis using a more reasonable data set supports G-SIB surcharge amounts that are equal to or lower than those established by the Basel Committee, not higher.
These deep flaws in the U.S. framework were problem enough when the G-SIB surcharge applied only to firms’ ongoing capital requirements, but they will be greatly magnified if the surcharge takes on the even more prominent role envisioned in the Fed’s recent proposals. Should the Fed move forward, the need to revisit and refine the U.S. G-SIB surcharge methodology into a policy that is both conceptually coherent and empirically sound will be stronger and more urgent than ever.