The Dodd-Frank financial reform law is premised on the belief that markets are imperfect and subject to costly and dangerous errors, as in the recent credit crisis, and that those errors need to be corrected. Much of the law's implementation is delegated to regulators, but their rule proposals can only be as good as the regulators themselves.
The law fails to recognize that, though markets are indeed imperfect, so are regulators. Behavioral economics, which applies psychology to financial decision-making, highlights that both are subject to biases that lead to imperfect decisions. It is frequently invoked to justify regulations intended to protect investors and consumers from their own biases. The flip side is that regulators are subject to the same behavioral limitations. Hence, the question becomes: who protects us from the regulators?
Facts come to us through the veil of human emotion. People, investors and regulators alike, are fallible and prone to peer pressure and psychological bias. Failure to recognize this underlies the intellectual error of believing that regulators can protect investors and consumers from market crises. Recent examples of such failures include the Gulf oil spill, the Toyota recall and the 2008 financial meltdown. These episodes raise the question of how regulators failed to foresee market failures that seem so obvious in retrospect.
In fact, experts such as regulators are more prone to bias than nonexperts. They confuse knowledge with beliefs, and because they overestimate what they know, they become overconfident in their abilities, especially when dealing with complex problems. Thus, they believe they can control events, such as financial crises, that are inherently uncontrollable. This creates a false sense of confidence and an inability to see, consider or even imagine problems. It was epitomized by Alan Greenspan's "shocked disbelief" that banks failed to act in their own self-interest.
Problems like the Lehman collapse are frequently classified as unforeseeable. In reality, these were rare but predictable events that were ignored. Probably the biggest issue is the groupthink that forms within insular, siloed regulatory bodies, which causes regulators to dismiss ideas that deviate from the group consensus and to rationalize away warnings that challenge the group's beliefs.
A form of social groupthink known as a cascade can also occur. A cascade is a self-reinforcing collective belief or fad, similar to herding: individuals suppress their private preferences in public to protect their reputations, adhering instead to popular ideas and beliefs. An example is the belief that the financial crisis was caused by banks, which must now be constrained by restrictive regulation. Most now realize the cause was more complicated.
These are the same problems that led overconfident bankers to keep pace with their competitors, believing everything was under control thanks to superior risk management. Unlike market failures, regulatory failures face no sanctions to weed out inefficient regulations, so the unintended consequences of regulatory actions become entrenched. This is reinforced by regulators' limited ability to learn from their decisions, given the long lag between those decisions and their consequences.

Regulators can improve their efforts to create smarter regulations by recognizing their own fallibility. A first step is rigorous cost-benefit analysis to ensure the cure is not worse than the disease; regulators should demonstrate that proposed rules are the best way to achieve their goals. Cost-benefit analysis nudges regulators away from their preferences and toward evidence-based decisions.
Next, regulations should be periodically reviewed and updated by an independent agency, such as the Congressional Budget Office or the Office of Management and Budget, to ensure their continued value. Third-party review constrains unchecked bias by forcing regulators to justify their actions. This is especially important given the tendency since the crisis to overregulate.
These observations call for a realistic evaluation of regulatory limits. They address the fatal conceit in the notion that regulators are immune to cognitive flaws and can therefore shape markets according to their wishes. Regulators, with their new responsibilities, should not be expected to become wiser than they really are; otherwise, we will just get costly regulations that we do not need and that cannot work. That could lull markets into a counterproductive overconfidence, setting up an even more severe future crisis. We need both markets and regulations, but we have to recognize their limitations and the constraints on our ability to influence them. These observations are not meant to criticize regulators; rather, they remind us that regulators are only human.