LEGACY SYSTEMS are a vital but maligned component of bank technology. They are responsible for most of the industry's transaction processing, the maintenance of core accounts, and possibly 90% of debit and credit posting.
Cumulative investment in legacy systems probably exceeds $100 billion. Decades of features built by tens of thousands of programmers have made these systems the repository of knowledge on how to process banking transactions.
At the same time, legacy systems are far more costly, more inflexible, and less useful than they would be if they were rebuilt using new technologies. Yet progress in converting legacy systems to more modern technologies has been agonizingly slow. Less than 0.1% of the industry's mainframe systems have been "downsized," one high-potential approach to converting a legacy system.
Several well-known examples of attempts to modernize have failed spectacularly, such as Westpac's $100-million-plus investment in a set of new core banking systems. Even successful examples of modernization, such as State Street's Horizon system, have not spurred movement in this direction.
No wonder the subject creates such frustration.
We define a legacy system in terms of function: if a system is very important to the bank yet difficult to work with and change, it is a legacy system. In practice, most are older and likely to be mainframe-based. Many are written in older languages (generally COBOL, but in some cases assembly language) and run on IBM mainframes. They are usually large systems and often have millions of lines of code.
Specific problems with legacy systems include:
* There is a lack of documentation because the system has been changed and enhanced many times over the years. Usually no one has kept track of all the changes.
* The original design has degenerated from one that was elegant or at least sensible to one that no longer makes sense.
* The architecture may be 25 years old and wildly out of sync with today's environment.
* There is a long backlog of system enhancement requests. There may even be an "invisible backlog," an accumulation of user needs that have not yet been turned into formal requests.
* Adequate functionality may be lacking and, because of the cost and time delays involved, may not be added anytime soon.
For example, one major bank has a wholesale business that is completely dependent on a very large legacy system. The system was purchased six years ago. The original vendor's financial difficulties have severely reduced its ability to provide promised enhancements. The user has consequently paid millions each year to an outside systems integrator to enhance the system.
Despite the problems, legacy systems offer benefits too. The principal ones include:
* They already exist. At the end of the day, this is the reason why we continue to use them.
* They may have a lot of functionality, with numerous options, many reports, high configurability, built-in controls, industrial-strength standards, and high security levels.
* They can be very sophisticated in servicing the value-added business needs that derive from today's complex financial banking world.
* They can generally handle high volumes efficiently, a business requirement that in some cases would argue against conversion to a smaller platform with less throughput.
* They have generally been proven to work through years of experience and testing. Moreover, the interfacing of each legacy system to the bank's other systems (including other legacy systems) is already accomplished.
Legacy systems can be extremely valuable, and many banks depend heavily on them. And some vendors' systems with legacy attributes are doing very well in the marketplace.
Computer Power Inc., for example, is essentially one high-throughput mainframe system with extensive functionality. About 42% of the country's one- to four-family mortgages are processed on this system. The system has been well-maintained, since it is such a valuable investment, and may or may not be considered a legacy system.
Yet, it is a COBOL system, with very old roots. It is hardly new technology. Begun in 1969 for a very modest investment, this system -- and its customer base -- was last sold in 1991 for $300 million.
The 15-year-old Hogan system is another example. As a package, it has some of the attributes of a legacy system, such as being written in assembler and COBOL, with a mainframe orientation and configurability. Yet Hogan is doing better today than ever. Profits were up 100% in its latest fiscal year, and the company had no debt.
Hogan's IDS package for demand and time deposit accounts has 110 licensees worldwide and is a leader in the bigger bank marketplace. In short, maturity does not necessarily negate value.
The banking industry must learn to live with its legacy systems. Progress in eliminating them through various methods has been agonizingly slow. About 3% to 4% of the total number of legacy systems can be "resolved" yearly. By "resolve" we mean either the complete elimination of the system or a porting, conversion, or reinvestment in the system so that it is much easier to deal with and maintain and therefore no longer deserves to be called a legacy system.
At the same time, some legacy systems are becoming larger and are doing more of the industry's work. They may even become, like the CPI system, irreplaceable industry assets.
Nevertheless, we predict that the number of legacy systems will decline, for the following reasons:
* Acquisitions. Generally, an acquired bank loses its technology and any legacy systems it has. However, the acquiring bank's legacy systems usually just absorb the workload of the resolved systems.
* Conversions. Purchases of packages (like Hogan) or conversions to third-party processors (like CPI) to replace an in-house legacy system generally mean that more accounts are being concentrated in fewer legacy systems.
* Restructuring technologies. Software and methodologies are available today that, in theory, allow older code to be "restructured." Depending on the circumstances, the benefits could range from documenting the old code to actually generating new code that is more efficient and maintainable. Generally, usage by banks has been low because these techniques haven't been proved effective with large transaction processing systems.
* Client-server technologies. Distributed platforms like Unix, OS/2, or Windows NT will improve with time and should become more capable of replacing legacy systems.
* Reinvestment. An increased rate of reinvestment in new systems, fueled by higher industry profitability, will also help.
Thus, we estimate that these factors will reduce the number of legacy systems at an accelerating rate into the next century. Today there are about 5,000 legacy systems in use in U.S. commercial banks. We predict roughly 4,650 by 1995, 3,410 by 2000, 2,010 by 2005, 1,030 by 2010, and only about 480 still in use by 2015.
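The "accelerating rate" behind these projections can be checked with a short calculation. The milestone counts come from the text; the 1992 base year is an assumption, since the article only says "today":

```python
# Projected counts of legacy systems in U.S. commercial banks (from the text).
# Base year 1992 is an assumption.
milestones = [(1992, 5000), (1995, 4650), (2000, 3410),
              (2005, 2010), (2010, 1030), (2015, 480)]

# Compound annual rate of decline between consecutive milestones.
for (y0, n0), (y1, n1) in zip(milestones, milestones[1:]):
    rate = 1 - (n1 / n0) ** (1 / (y1 - y0))
    print(f"{y0}-{y1}: {rate:.1%} of systems resolved per year")
```

The implied resolution rate starts near the 3% to 4% pace cited above and climbs past 14% per year by the final period, which is what makes the projected decline "accelerating."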
It is worth noting that many of today's newer systems may be completely outmoded by the year 2015 and may well be considered "legacy" under the standards of the future. We are used to saying that all new systems are obsolete the day they are built. But perhaps the moral of the story is that technical obsolescence, in the form of today's legacy systems, does not mean a system has no value.
Banks must therefore plan a future for their legacy systems. The first step is to determine, as part of the strategic technology planning process, which of a bank's legacy systems are destined for survival. The objective is to rank and prioritize them.
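The ranking step can be sketched as a simple weighted scoring exercise. The criteria, weights, and system names below are all hypothetical; the article prescribes no particular scoring method:

```python
# Hypothetical ranking sketch: score each legacy system on a few criteria.
# Systems, criteria, and weights are illustrative, not from the article.
systems = [
    {"name": "DDA core",    "business_value": 9, "maintainability": 2, "volume": 9},
    {"name": "Mortgage",    "business_value": 7, "maintainability": 5, "volume": 6},
    {"name": "Trust acctg", "business_value": 4, "maintainability": 3, "volume": 2},
]

weights = {"business_value": 0.5, "maintainability": 0.2, "volume": 0.3}

def survival_score(system):
    """Weighted score; higher scores mark systems earmarked for survival."""
    return sum(weights[k] * system[k] for k in weights)

for s in sorted(systems, key=survival_score, reverse=True):
    print(f"{s['name']}: {survival_score(s):.1f}")
```

The point of the exercise is not the particular weights but forcing an explicit, comparable judgment about which systems deserve continued reinvestment.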
The second step is to install both front and back ends around the legacy systems that are earmarked for survival. Many legacy systems are weakest in their inquiry functions and in providing bankers with cross-customer data drawn from multiple systems. The front ends, built using networking technologies, isolate the legacy system and greatly improve the ability to keep using it. The back ends allow better manipulation of decision-making data.
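The front-end idea can be sketched as a thin aggregation layer that assembles a cross-customer view. Everything here (system names, record fields, balances) is hypothetical, since the article names no specific interfaces:

```python
# Hypothetical "front end" that isolates legacy systems and assembles a
# cross-customer view. The fetch functions stand in for network calls to
# the legacy mainframe systems themselves.

def fetch_deposits(customer_id):          # stand-in for a legacy deposit system
    return {"checking_balance": 1200.00}

def fetch_mortgage(customer_id):          # stand-in for a legacy mortgage system
    return {"mortgage_balance": 85000.00}

def customer_profile(customer_id):
    """Aggregate data from several legacy systems behind one inquiry call,
    so bankers never have to query each legacy system directly."""
    profile = {"customer_id": customer_id}
    for fetch in (fetch_deposits, fetch_mortgage):
        profile.update(fetch(customer_id))
    return profile

print(customer_profile("C-1001"))
```

Because the banker-facing inquiry talks only to the front end, the legacy systems behind it can later be replaced one at a time without changing the inquiry interface.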
The third step is to carefully add new products or services to the surviving legacy systems. These additions must be driven by the needs of the business and be capable of sustaining reinvestment.
Eventually, the legacy system can be ported to a client-server technology, even though this may not occur until well into the next century.
The bottom line is to move carefully. Planning for the future is essential and this planning should be neither utopian (get rid of all the legacy systems) nor myopic (they'll last forever). The key, as usual, is informed evaluation and decision-making.