Crunching data in byte-size pieces.

Massively parallel processors may give banks cheaper, more powerful computing, but don't expect them to elbow aside the tried-and-true mainframe any time soon.

Terry Sandvik is convinced he's on to something big.

Sandvik, who's president of First Bank Systems Inc.'s Business Technology Center, has just started using a type of powerful mainframe computer called a cooperative multiprocessor. Not only does he expect it to save his institution huge amounts of money, but he's also convinced it meets the industry's future computing needs.

The rapid development of electronic banking services--those that allow customers to pay bills from a home computer or withdraw cash from an automated teller on the other side of the world--is forcing banks' data processing facilities to play catch-up. Banks are being pressured to credit and debit accounts and balance their general ledgers instantaneously, in what is called an "on-line, real-time" mode. "That's what the industry is moving toward," says Sandvik.

First Bank's new high-performance computer is an order of magnitude more powerful than traditional mainframes, which still rely on the decades-old batch-processing method. In that form of computing, transactions are "memo posted" throughout the day and balanced against the books overnight. Traditional mainframes simply weren't built to keep up with transactions throughout the day; high-performance systems are, and Sandvik hopes to make the most of that power.
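The difference between the two modes can be sketched in a few lines of code. What follows is a minimal illustration in Python, not First Bank's actual software; the account number and amounts are invented. In batch mode, a transaction is merely memo-posted until the nightly run; in on-line, real-time mode, it hits the books the moment it arrives.

```python
# Illustrative sketch only -- not any bank's production system.
from collections import defaultdict

class BatchLedger:
    """Batch mode: transactions are memo-posted during the day,
    then applied to the books in a single overnight run."""
    def __init__(self):
        self.balances = defaultdict(float)  # the "official" books
        self.memo_posts = []                # transactions awaiting the nightly run

    def post(self, account, amount):
        self.memo_posts.append((account, amount))  # noted, but books unchanged

    def nightly_run(self):
        for account, amount in self.memo_posts:
            self.balances[account] += amount       # books catch up overnight
        self.memo_posts.clear()

class RealTimeLedger:
    """On-line, real-time mode: every transaction updates the books immediately."""
    def __init__(self):
        self.balances = defaultdict(float)

    def post(self, account, amount):
        self.balances[account] += amount           # books are always current

# Example: a $200 ATM withdrawal posted to a hypothetical account.
batch, real_time = BatchLedger(), RealTimeLedger()
batch.post("12345", -200.00)      # books unchanged until nightly_run()
real_time.post("12345", -200.00)  # books reflect it instantly
```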

Still, except for a few niche applications, high-performance computers are a largely unproven technology, particularly in industries such as banking. First Bank bought its system--an IBM Corp. System/390 Parallel Transaction Server--only earlier this year, so it has no experience yet with the system in actual day-to-day production. But Sandvik is optimistic. The machine was tested throughout the spring and summer, and it remained on schedule to start operation this month.

Should First Bank's System/390 pass the test, it will mark a turning point in bank processing, demonstrating that massively parallel processing can indeed handle the type of work that for decades has been done on IBM mainframes.

High-performance systems have actually been around for years. The term refers to all systems whose power exceeds that of traditional mainframes, and it includes supercomputers, massively parallel processors, symmetric multiprocessors and cooperative multiprocessors. There are fine technological distinctions among these categories, but what they all share is a basic design that relies on a series of processors, some of which are identical to the microprocessors in standard personal computers. This allows them to break down huge, numerically intensive jobs into hundreds--and in some cases thousands--of pieces. A traditional mainframe, by contrast, employs a single large central processing unit.
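The divide-and-conquer idea at the heart of these machines can be illustrated with a short sketch. The example below uses Python's multiprocessing module simply as a stand-in for the many processors in a parallel system; the million-account interest-posting job and the rate applied are invented for illustration.

```python
# Illustrative sketch of splitting one big job across many processors.
from multiprocessing import Pool

def process_chunk(accounts):
    """The work one processor performs on its slice of the job:
    here, posting a hypothetical daily interest rate."""
    return [(acct, balance * 1.0001) for acct, balance in accounts]

def run_parallel(all_accounts, workers=8):
    # Break the huge job into pieces...
    size = max(1, len(all_accounts) // workers)
    pieces = [all_accounts[i:i + size] for i in range(0, len(all_accounts), size)]
    # ...and hand each piece to a separate processor, rather than feeding
    # the whole job through one large central processing unit.
    with Pool(workers) as pool:
        results = pool.map(process_chunk, pieces)
    return [row for piece in results for row in piece]

if __name__ == "__main__":
    accounts = [(str(n), 100.0) for n in range(1_000_000)]
    updated = run_parallel(accounts)
    print(len(updated), "accounts processed")
```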

One benefit is a more rapid processing time for certain jobs. For example, Chase Manhattan Corp.'s credit card marketing group is testing a high-performance system from AT&T Global Information Solutions, formerly NCR Corp. The preliminary results show that queries through the database of 13 million cardholders that might take more than a day with a traditional mainframe can be done in under an hour, according to Jonathan Vaughan, vice president for application systems technology in Chase's Corporate Technology & Information Services Group.

A Toehold in Banking

By the late 1980s, a handful of banks had started to find uses for massively parallel processors. In 1986, BankAmerica Corp. purchased a system from an El Segundo, CA-based firm called Teradata, which was later acquired by AT&T Co. and, in March of 1992, merged with NCR to form AT&T GIS. The firm is the leading source of high-performance systems to banks, according to Ray Ferrara, an analyst with The Tower Group of Wellesley, MA, which will publish a study on high-performance computers in banking this fall.

At least initially, these machines were typically installed in credit card marketing groups, where they searched through large databases of customer information to facilitate direct marketing, a task called data mining. Over time, they've been employed in some capital markets groups, where traders have used them to develop models and prices for mortgage-backed securities and derivative instruments.

But these database searches and analytical tasks have been "off-line," meaning they haven't involved the actual crediting and debiting of accounts and the balancing of the books.

That is the very heart of what a commercial bank does. If parallel processors can perform this task--which is what First Bank is trying to prove--it could be the death knell for mainframe computing as the industry knows it. It would also be a further sign that client/server computer systems are no longer isolated on the fringes of bank processing, but are now assuming some of the core jobs that keep banks operating.

"Obviously, there's a lure to all of this," says Dan Schutzer, Citicorp's director of advanced technology. And that lure can be expressed in dollars and cents. The general role of thumb is that each unit of mainframe horsepower on a massively parallel or a multiprocessing machine costs roughly half as much as with a traditional mainframe.

Computer power is frequently measured in millions of instructions per second, or MIPS. And while this is generally regarded as a less-than-perfect measurement, it gives a rough idea of a system's horsepower.

First Bank's multiprocessing system cost $15,000 to $20,000 per MIPS, says Sandvik. By comparison, upgrading an older ES/9000 mainframe would have cost $35,000 to $40,000 per MIPS. The price differential was compounded by the multiprocessing system's greater flexibility, since it allowed for improvements in smaller increments. A traditional mainframe upgrade might be done in units of 50 to 60 MIPS, whereas a parallel processor can be upgraded in units of 12 MIPS.

Sandvik says that a parallel processor could be upgraded in more gradual--and less expensive--increments of roughly $250,000. Mainframe upgrades might require increments of $1.5 million.
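Those round numbers are roughly consistent with one another: a 12-MIPS parallel upgrade at $15,000 to $20,000 per MIPS works out to about $180,000 to $240,000, close to the $250,000 figure, while a 50- to 60-MIPS mainframe step at $35,000 to $40,000 per MIPS would run well over $1.5 million.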

The lower cost is just one factor that makes high-performance computers so alluring. But buyer beware! With their relatively limited track record, these machines pose the same risk that any new technology does. Banks that have installed them so far typically have had to do extensive testing before bringing them on line.

"The lure of replacing our mainframes will be hard to realize," Citi's Schutzer says. "That's the curse of having successful systems. Before I replace a mainframe with another system, I have to make sure I'm not sacrificing anything." Even though IBM now sells high-performance systems as well as mainframes, it too "will be the first to tell you they're not for everybody," Schutzer says.

Nonetheless, some banking information system executives are determined to try. But until they succeed, the grunt work of processing deposits and loans still belongs to the big iron.
