The use of big data, so much on display these days as headline-grabbing Orwellian Big Brother technology, has somehow escaped the global framers of our reordered risk regimes.

When we look at the entire set of risk mitigation concepts proposed since the financial crisis, we see a basic pillar of risk management emerging: more side-pocket money for a rainy day, whether in shared risk capital pools or in more capital set aside by each institution. This is an expected and reasonable reaction of regulators and legislators following the best practices of the past.

But in this scenario, transparency has largely been left aside as a regulatory tool, perhaps because it is too encumbered with technology that defies understanding by the economists, lawyers, legislators, financial regulators and business executives who run our financial system.

Without a technology mindset, regulators, financial industry members, thought leaders and politicians have set centralization of risk and stress testing of capital boundaries as the standard for regulators and for the business of financial industry risk management. Such noted thought leaders as Paul Volcker, Eugene Ludwig, William Isaac, Tom Hoenig and Neil Barofsky have lately called for variations of this approach. All are merely looking to tweak the regulatory architecture of a failed system to improve it incrementally.

Dodd-Frank, Brown-Vitter, Corker-Warner and other two-named legislation of earlier and recent vintage have similarly tweaked the best practices of the past. Even the most touted "innovations" are simply best practices of yesterday's thinking: central counterparties are a century-old "innovation"; swap execution facilities apply old ideas of exchange trading of long-dated options; contingent capital and stress tests are as old as the capital and contract markets they are supposed to be buttressing.

Much of what passes for new is incremental change on top of old thinking, if not simply warmed-over ideas of the past made palatable by the rush for palliatives for our financial crisis.

A prime example is the resolution and recovery concept for dismantling a too-big-to-fail bank. A "living will" is to be prepared by each systemically important financial institution to guide regulators in dismembering it. Yet it is apparent to everyone that no one knows how these institutions were built; that the blueprint for these financial behemoths is missing goes unquestioned.

How then can regulators guided by a hastily prepared living will, or an increased capital buffer that is no more than a measure for counting down to failure, actually use these devices to dismantle or recover TBTFs from serious capital depletion or failure?

A living will requires the drafter to have a full inventory of assets and liabilities, systems and interconnections, as well as all entanglements with outside facilities and organizations. If regulators are asked to execute such a will, they will surely pull the wrong brick and topple the entire edifice. This is not a practical recipe for resolving troubled banks, and certainly not a recipe for improving them. Shouldn't we want to preserve the good of being big, global and diversified if society can manage these banks' risk exposures and support their stabilizing effects on economic order?

Best to place society's bet on slowly reengineering TBTFs and making their transactions completely transparent so that they can be monitored in real time by computers. This effort is made more doable now that big data technology is real and the Group of 20 financial regulators has approved a long-missing standard of global identification for financial market participants and the products they own, trade and process. It is amazing that the industry and its regulators survived this long without such a means to aggregate and view financial transactions electronically.

With financial market participants and products uniquely identified in a standard way, the power of Internet-scale data mining and big data pattern-matching algorithms can be deployed. Technology's aim here would be to seek out triggers of systemic contagion across the interconnected financial system, much as we can now find nearly anything, anytime, across the World Wide Web.
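As a minimal illustration of what standard identification makes possible, the sketch below aggregates trade records by a counterparty identifier (a stand-in for the G20-endorsed legal entity identifier) and flags concentrated exposures. All records, field names and thresholds here are invented for illustration; real regulatory analytics would of course be far richer.

```python
from collections import defaultdict

# Hypothetical trade records, each tagged with standard counterparty and
# product identifiers (stand-ins for LEIs and product codes).
trades = [
    {"counterparty": "LEI-BANK-A", "product": "IRS-10Y", "notional": 5_000_000},
    {"counterparty": "LEI-BANK-A", "product": "CDS-5Y",  "notional": 7_000_000},
    {"counterparty": "LEI-BANK-B", "product": "IRS-10Y", "notional": 2_000_000},
]

def aggregate_exposure(trades):
    """Sum notional exposure per counterparty identifier."""
    totals = defaultdict(int)
    for t in trades:
        totals[t["counterparty"]] += t["notional"]
    return dict(totals)

def flag_concentrations(totals, threshold):
    """Return counterparties whose aggregate exposure exceeds the threshold."""
    return [cp for cp, amount in totals.items() if amount > threshold]

totals = aggregate_exposure(trades)
print(flag_concentrations(totals, 10_000_000))  # → ['LEI-BANK-A']
```

The point is not the arithmetic, which is trivial, but that without a shared identifier the two "LEI-BANK-A" trades could not reliably be recognized as exposures to the same entity across institutions and systems.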

Writing recently in an op-ed in The Wall Street Journal, law professor David Skeel suggested harnessing a bank's own superior expertise by giving it the option of deciding "how best to downsize" itself. To my mind this should be restated as "how best to re-engineer" a bank.

Regulators should demand a blueprint in the form of systems specifications, including anchors to each SIFI's databases, so that regulators' computers can be programmed to observe transactions and monitor the resulting positions and their hedges. Transactions would become transparent, letting regulators see, in real time and by computer, that which they are mandated to oversee, while preserving those financial institutions that control risk and serve global markets.
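A toy version of the kind of monitor such a blueprint would enable might look like the following: positions are updated as transactions stream in, and a breach of a net-position limit is flagged immediately. The class name, product codes and limit are all hypothetical, chosen only to make the real-time idea concrete.

```python
class PositionMonitor:
    """Track signed net positions per product as transactions stream in."""

    def __init__(self, limit):
        self.limit = limit    # hypothetical net-position limit set by a regulator
        self.positions = {}   # product identifier -> signed net notional

    def observe(self, product, signed_notional):
        """Apply one transaction; return True if the limit is now breached."""
        net = self.positions.get(product, 0) + signed_notional
        self.positions[product] = net
        return abs(net) > self.limit

monitor = PositionMonitor(limit=1_000_000)
monitor.observe("IRS-10Y", 800_000)             # within limit
breached = monitor.observe("IRS-10Y", 500_000)  # net now 1.3m
print(breached)  # → True
```

The contrast with a living will is the timing: the will is consulted after failure, while a monitor of this kind raises its flag on the transaction that creates the danger.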

This is not to do away with more side-pocket money for a rainy day, but progress toward achieving a transparent bank could be accompanied by less stringent capital requirements. A better way, I think, than regulators presiding over the deaths of the too-big-to-fail banks.

Allan Grody, the president and founder of Financial InterGroup Holdings Ltd., is writing a book entitled "Reengineering the Financial Industry."