Seventh in a series

It was only natural to expect a significant regulatory response to the mortgage meltdown.  After all, it revealed a number of glaring deficiencies in industry underwriting processes, credit standards and product development. 

However, in addressing these problems we have overengineered the regulatory framework with a complex layering of rules that effectively reduces underwriting to a "yes or no" checklist. Compensating factors, which historically have been an effective safeguard against underwriting errors, can no longer be considered. Furthermore, we have failed to account for the cost trade-offs associated with the Dodd-Frank Act's Qualified Mortgage and Qualified Residential Mortgage standards.

The Consumer Financial Protection Bureau's final rule on Qualified Mortgages was the policy "down payment" on a complete revamping of the way mortgages will be originated. Under that rule, borrowers must come to their lender with a debt-to-income ratio no greater than 43%, with some exceptions for GSE- and government-eligible mortgages. Why the DTI cap was pegged at exactly 43% and not some other level remains a mystery.

The last leg of this regulatory overhaul of mortgage underwriting standards will be rolled out by the Federal Reserve and its sister bank regulatory agencies in the form of the Qualified Residential Mortgage rule. QRM will exempt securitizers from having to retain a 5% stake in a mortgage security if the underlying loans meet certain criteria. A proposed QRM rule was floated ahead of the QM standards and was widely panned for its overly restrictive 80% loan-to-value requirement.

The QM and QRM rules effectively bifurcate the standard underwriting process, which allows trade-offs to be made among the three Cs of underwriting: credit, capacity and collateral. By imposing separate bright lines for DTI and LTV, the new rulebook shuts the door on credit for many otherwise well-qualified borrowers. Manual underwriting guidelines and automated underwriting systems permit lenders to offset a higher-risk attribute on one underwriting criterion with a lower-risk attribute on another, within limits. Consider a consumer applying for a jumbo loan to finance a home in a pricey neighborhood. This person may have a pristine credit history and a 30% down payment, but with a debt-to-income ratio just over the 43% limit, she's disqualified for QM. Likewise, a borrower with a 780 FICO (a sign of a good risk) and a 36% DTI ratio (ditto) seeking a loan for 81% of the home's value (a hair above the QRM ceiling) may have difficulty obtaining credit, since under the proposed QRM rules this loan would be subject to the risk retention requirements.
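The bright-line effect described above can be sketched in a few lines of code. The 43% DTI and 80% LTV cutoffs come from the rules discussed here; the borrower profiles and the function itself are illustrative, not any regulator's actual test:

```python
# Illustrative sketch: separate DTI and LTV bright lines, each judged in
# isolation, disqualify borrowers a compensating-factors review might pass.
# Thresholds are from the QM/QRM rules discussed above; profiles are made up.

QM_MAX_DTI = 0.43   # Qualified Mortgage debt-to-income ceiling
QRM_MAX_LTV = 0.80  # proposed Qualified Residential Mortgage LTV ceiling

def passes_bright_lines(dti, ltv):
    """Checklist underwriting: no trade-offs among the three Cs allowed."""
    return dti <= QM_MAX_DTI and ltv <= QRM_MAX_LTV

# Jumbo borrower: pristine credit, 30% down, but DTI just over the line.
print(passes_bright_lines(dti=0.44, ltv=0.70))  # False: fails QM on DTI alone

# 780 FICO, 36% DTI, but 81% LTV, a hair above the QRM ceiling.
print(passes_bright_lines(dti=0.36, ltv=0.81))  # False: triggers risk retention
```

A compensating-factors approach would instead weigh the strong credit and down payment against the single out-of-bounds ratio before deciding.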

Both outcomes create what statisticians refer to as Type 1 errors, or false positives. In this case, we treat loans that fall outside industry norms but are likely to perform well as the mortgage market's equivalent of Hester Prynne. Such outcomes depress demand for housing, dampen home price appreciation and slow the recovery in housing markets. Moreover, borrowers not meeting the QM or QRM standards may face higher mortgage costs where credit is available at all. Beyond the economic costs of making Type 1 errors are the social costs: delayed homeownership among some borrower segments and, for example, the neighborhood blight afflicting many of our hard-hit communities.

Yet the agencies, by setting high DTI and low LTV thresholds for QM and QRM, respectively, appear to worry more about Type 2 errors, or false negatives: here, loans that pass underwriting but end up defaulting. The Federal Housing Finance Agency data used to guide regulators in setting the proposed 80% LTV standard for QRM considered only the incremental effect on 90-day-plus delinquency rates, while ignoring considerations such as the impact on mortgage demand or the higher mortgage costs that tighter LTVs bring. Reducing Type 2 errors comes at the expense of increasing Type 1 errors, and vice versa.

From a policy perspective, achieving a sensible balance between the two error types should be a clear objective of any bright-line underwriting standard. We may have missed the opportunity to apply such an approach to QM's debt-to-income test, but the QRM loan-to-value threshold could still be set by comparing the costs of both Type 1 and Type 2 errors. Figuring the cost of Type 2 errors is somewhat easier, since it entails estimating credit losses under different LTV rules. The trickier part of the exercise is computing the economic and social costs of Type 1 errors under different LTV thresholds, but economists are well equipped to make such estimates. Once the costs associated with both error types are quantified across a range of LTV scenarios, regulators could determine the LTV threshold at which the combined cost of both errors is minimized.
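The threshold-setting exercise just described amounts to a simple cost-minimization. Here is a minimal sketch, with entirely hypothetical, placeholder cost figures standing in for the estimates economists would actually produce:

```python
# Hypothetical sketch of the exercise described above: choose the QRM LTV
# ceiling that minimizes the combined cost of Type 1 errors (creditworthy
# loans shut out) and Type 2 errors (approved loans that later default).
# All dollar figures are made-up placeholders, not real estimates.

ltv_ceilings = [0.70, 0.75, 0.80, 0.85, 0.90, 0.95]
type1_costs  = [9.0, 6.5, 4.5, 2.8, 1.5, 0.8]  # $B; falls as the ceiling loosens
type2_costs  = [0.5, 0.9, 1.6, 2.9, 5.0, 8.5]  # $B; rises as riskier loans qualify

total_costs = [t1 + t2 for t1, t2 in zip(type1_costs, type2_costs)]
best = min(range(len(ltv_ceilings)), key=lambda i: total_costs[i])

print(f"Cost-minimizing LTV ceiling: {ltv_ceilings[best]:.0%}")
# With these placeholder numbers, combined cost bottoms out at 85% LTV.
```

The point is not the specific answer, which depends entirely on the cost inputs, but that an explicit comparison of the two error costs, rather than attention to delinquency rates alone, is what drives the threshold.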

Certainly there is an element of judgment even in this process. But standards that fail to balance the costs of both types of policy errors lead to suboptimal policy.

Clifford V. Rossi is the Executive-in-Residence and Tyser Teaching Fellow at the Robert H. Smith School of Business at the University of Maryland.