Mark Hurd, president of Oracle Corp., recently told a group of financial services executives an anecdote about a large bank customer that required 9,000 applications to keep its old core system running.
"Most of those applications are old and homegrown, and most of the people who developed them are gone," Hurd said during a February presentation in New York.
It was meant to be a cautionary tale about how little banks actually get for their bloated IT budgets, and how cranky and unwieldy their legacy systems are. And though it was also a subtle pitch for Oracle's modern core system, Flexcube, much of what he said rang true.
Many bank legacy systems were built as long as 40 years ago — but their age does not make them worthless. If the systems were really that bad, banks would make critical errors all the time. And, generally speaking, they don't.
But they do make mistakes, and those mistakes tend to happen when growing business demands push these systems beyond what they were designed to do.
Unfortunately, those demands keep coming. Banks are under mounting pressure to increase profits by churning out newer products and services.
"The old systems run like the wind; they are lean and mean and fast in terms of core processing," says Steve Ledford, a partner at Novantas LLC of New York. "The biggest challenge facing banks that rely on these systems is the need to respond to change quickly, whether it is from marketplace dynamics and increased competition or regulatory dynamics, or the demands of a world that is moving faster."
Core systems are incredibly complex. In the largest banks they are vast machinery responsible for the financial life of the institution, including deposits, withdrawals, and all the channels that give customers access to their accounts, like the call centers, the branches, the ATMs and now Internet and mobile banking. Core systems can span continents, interacting with thousands of employees and millions of customers every day.
And as Hurd rightly pointed out, such systems have often evolved to encompass thousands of applications at the top global banks.
What's referred to as a legacy system has little standardization and actually varies greatly from bank to bank. (There can even be local variations within individual banks, experts say.) Legacy systems were created in the days before off-the-shelf software existed, or even the software shops that create such things today. They were designed for old mainframe environments, and they rely on sometimes antique programming languages and tools, such as COBOL and Assembler, along with the CICS transaction processing monitor.
"These systems were designed to reflect the old reality of how banks did business in the '70s," says Bart Narter, senior vice president at Celent. Then, banks accounted for deposits and withdrawals in batches at the end of the day. Today, they are expected to operate in real time.
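The shift Narter describes can be sketched in a few lines. In this toy illustration (hypothetical code, not drawn from any bank's actual system), a batch-era core queues the day's transactions and posts them only in the nightly run, while a real-time core applies each transaction the moment it arrives:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    balance: int = 0  # cents, to avoid floating-point rounding

@dataclass
class BatchCore:
    """End-of-day posting, the way 1970s-era cores were designed."""
    account: Account
    pending: list = field(default_factory=list)

    def submit(self, amount: int) -> None:
        # During the day, transactions only accumulate in a queue;
        # the visible balance does not change until the nightly run.
        self.pending.append(amount)

    def end_of_day(self) -> None:
        for amount in self.pending:
            self.account.balance += amount
        self.pending.clear()

@dataclass
class RealTimeCore:
    """What modern channels (ATMs, mobile) expect: immediate posting."""
    account: Account

    def submit(self, amount: int) -> None:
        self.account.balance += amount  # reflected instantly

acct = Account(balance=10_000)
batch = BatchCore(acct)
batch.submit(-2_500)
print(acct.balance)   # still 10000: the withdrawal is invisible until the batch runs
batch.end_of_day()
print(acct.balance)   # 7500
```

The gap between the two `submit` methods is the gap the article describes: every real-time channel bolted onto a batch core needs extra machinery to fake the immediacy customers now expect.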
When banks consider replacing their legacy systems, they take the prospect very seriously. It can often take the largest banks more than five years to go through an upgrade, and it can cost hundreds of millions of dollars. Some recent core systems upgrades have even cost as much as $1 billion, says Jost Hoppermann, vice president, banking applications and architecture for Forrester Research.
On average, such upgrades take 40% more time and cost three times more than originally forecast, according to Forrester Research.
Upgrades are further complicated because banks can't have any downtime. And given the length of time it takes to make such changes, the business directives are often in flux, making a core upgrade project seem as if it's unfolding on quicksand.
"The business requirements that are in place at the end are often not the same as when they started," Hoppermann says.
Mergers and acquisitions are the key drivers for legacy system upgrades, experts say. Most of the top banks have merged numerous times with other banks over the past two decades, and each merger has required patching together disparate systems that can't communicate with each other.
Although banks often perform a kind of mating dance to determine which system will win out, it is usually the acquired bank that needs to make all the changes.
A legendary example of a botched upgrade is First Union Corp.'s merger with CoreStates Financial Corp. in 1998, during the first wave of bank megamergers. It was handled so poorly that CoreStates lost about 20% of its customers before the merger was completed. The banks encountered big problems getting the legacy systems to communicate with each other, which resulted in inexplicably blocked account access and payments not being applied to loans, among other problems.
(When First Union bought Wachovia in 2001 and took its name, the systems merger was a study in contrasts with the CoreStates episode. It took years to complete, but it has generally been lauded for its masterful efficiency.)
In today's environment, where there are fewer huge mergers, the technology challenges for legacy systems are potentially just as extreme. In addition to increased regulatory and compliance requirements, banks have more business demands to get new products, like mobile banking, online banking, remote deposit capture and person-to-person payments, in customers' hands as rapidly as possible. And these new developments also require near real-time updates to accounts.
Each new demand means a new application must be written to access the core system. And with each new application, stresses increase and scale issues emerge. Applications that work for 1,000 customers may shut down when a million people try to use them.
Chief information officers are then stuck in an uncomfortable position, experts say. They must either deliver the goods quickly, which means creating another application or another patch to the legacy core system, or they must try to convince business and marketing executives that they should wait it out while IT builds a stable, well-integrated platform. And the latter rarely happens, experts say.
Meanwhile, some banks have done such a poor job integrating core applications that they still turn to manual systems to complete processes.
In the 1990s it was common for multiple systems to process the same information for one account, which often led to differences in the way banks reported basic things, like account balances, experts say. Some large banks still have these problems. Though most have now consolidated back-office operations onto single systems, those systems may inadvertently apply different sets of business logic, take inexplicably different data inputs, or simply have been so poorly maintained that overseers believed upgrades were made when they really weren't.
"I have seen teams in these old legacy systems manually transferring information from one system to another, and I even know one bank that accepted this situation and created a business case for it," Hoppermann says.
The top banks know they need to upgrade their legacy systems. And many opt to do it in phases. Often they enlist the services of core vendors like Fidelity National Information Services Inc., Fiserv Inc. and Jack Henry & Associates Inc. Smaller banks use these companies for complete and integrated core services. The larger banks tend to use them to create middleware layers for key core applications or to develop new applications in a service-oriented architecture (SOA) environment.
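As a rough sketch of that middleware approach (all class names and the record format here are hypothetical, not any vendor's actual product), a service layer wraps the legacy core's record-oriented interface in a clean API that new channel applications can call, leaving the core itself untouched:

```python
# Hypothetical sketch of a middleware facade over a legacy core.
# Neither LegacyCore nor its fixed-width record format comes from any
# real vendor product; they stand in for a COBOL-era interface.

class LegacyCore:
    """Stand-in for the mainframe: speaks fixed-width records only."""
    def __init__(self):
        self._balances = {"0001": 10_000}

    def execute(self, record: str) -> str:
        # e.g. "BALQ0001" -> balance query for account 0001
        op, acct = record[:4], record[4:8]
        if op == "BALQ":
            return f"{self._balances[acct]:012d}"
        raise ValueError(f"unknown op {op!r}")

class AccountService:
    """Middleware facade: channel apps call this, never the core directly."""
    def __init__(self, core: LegacyCore):
        self._core = core

    def get_balance(self, account_id: str) -> int:
        record = f"BALQ{account_id}"   # translate the request into core format
        return int(self._core.execute(record))

svc = AccountService(LegacyCore())
print(svc.get_balance("0001"))  # 10000
```

The point of the facade is isolation: mobile, online and branch applications code against `AccountService`, so the bank can later swap out or incrementally replace the core behind it without rewriting every channel, which is the alternative to rip-and-replace that vendors like FIS describe.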
"The largest banks are looking at the challenges with their infrastructure … [and] they are looking at how they can SOA-enable their core system and solve their business needs as they arise," rather than do a rip and replace, says Anthony Jabbour, executive vice president for FIS Financial Solutions Group in Jacksonville, Fla.
(FIS provides core products and services, including an integrated core platform, to half of all U.S. banks with assets of $50 billion or more.)
The key question boils down to the trade-off between cost and benefit, as well as determining the optimal core environment, Jabbour says.
"We have a very competitive banking environment that is under stress right now, and there is no bank that has unlimited time or money to address these issues," he says.