BankAmerica Corp. is expanding its use of an increasingly popular computing scheme, deploying parallel processing systems to better manage its wholesale banking business.
Parallel processing systems consist of clusters of microprocessors linked tightly together to tackle complex computing tasks; depending on the design, the processors may share a single pool of main memory or each carry memory of their own.
San Francisco-based BankAmerica has long used massively parallel processing to house and mine its vast stores of data in the retail area and to perform various computing-intensive tasks such as credit card marketing and loan evaluation.
Now, the banking company has installed three new parallel processing servers to aid in decision support in various areas of the wholesale side. The trade and finance, capital markets, and cash management groups will be using parallel processing to do regulatory reporting, profitability analysis, and portfolio management, according to Henri Tello, a vice president for wholesale banking.
The three servers will be placed in separate areas. One has already been set up to test and develop applications, another will be used by business line managers for processing real queries, and the third will be used by the bank's advanced technology group.
Most banks and financial service companies are only now beginning to harness the power of these systems, which can hold billions of bytes of data and run hundreds of times faster than more conventional large-scale computers.
For example, Chase Manhattan Corp. started piloting massively parallel processing in its credit card division last summer, and Fidelity Investments has also been an avid user of such systems.
Unlike the traditionally more powerful massively parallel systems BankAmerica has favored in the past, the new servers are built in a "symmetrical" fashion.
Massively parallel processors differ from symmetrical multiprocessor systems in two basic ways: each processor, or node, within the massively parallel system has its own dedicated memory, as opposed to the completely shared memory that typifies classical symmetrical architecture; and massively parallel systems have a generally more scalable interconnect between processors.
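The memory-model distinction described above can be made concrete with a small sketch. The example below is purely illustrative and has no connection to BankAmerica's actual systems: it uses Python's standard library, with threads standing in for SMP processors that read and write one shared pool of memory, and separate processes standing in for shared-nothing MPP nodes that each hold a private copy of their data and must pass results over an explicit interconnect (here, a queue).

```python
# Illustrative sketch only: contrasting shared-memory (SMP-style) and
# shared-nothing (MPP-style) parallelism with Python's standard library.
import threading
import multiprocessing as mp


def smp_style_sum(numbers, n_workers=4):
    """SMP-style: worker threads all see the same main memory.
    They read the shared input directly and accumulate into one
    shared total, coordinating with a lock."""
    total = [0]
    lock = threading.Lock()
    chunks = [numbers[i::n_workers] for i in range(n_workers)]

    def worker(chunk):
        s = sum(chunk)           # reads shared data directly
        with lock:
            total[0] += s        # writes into shared memory

    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]


def _mpp_worker(chunk, out_q):
    # Each process ("node") has its own private copy of its chunk; the
    # only way to contribute a partial result is to send a message.
    out_q.put(sum(chunk))


def mpp_style_sum(numbers, n_workers=4):
    """MPP-style: the data is partitioned across worker processes that
    share nothing; partial sums travel back over an explicit
    interconnect (a queue) and are combined by a coordinator."""
    q = mp.Queue()
    chunks = [numbers[i::n_workers] for i in range(n_workers)]
    procs = [mp.Process(target=_mpp_worker, args=(c, q)) for c in chunks]
    for p in procs:
        p.start()
    partials = [q.get() for _ in procs]
    for p in procs:
        p.join()
    return sum(partials)


if __name__ == "__main__":
    data = list(range(1000))
    print(smp_style_sum(data))   # 499500
    print(mpp_style_sum(data))   # 499500
```

Both functions compute the same answer, but the shared-nothing version must explicitly partition the data and gather the partials, which is a toy-scale glimpse of the extra coordination complexity Mr. Tello describes; the trade-off is that shared-nothing designs avoid contention on a single memory pool, which is what makes the massively parallel interconnect more scalable.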
But, according to Mr. Tello, the challenge of scaling the symmetrical systems has become less of an issue lately, as the technology driving these systems catches up with the related massively parallel architecture.
"The shared-nothing architecture (of massively parallel systems) causes a lot more complexity," Mr. Tello said. "It seems the SMP (symmetrical multiprocessing) environment has matured faster."
Indeed, simplicity is one of the benefits of the symmetrical multiprocessing architecture, according to Richard Winter, president of Winter Corp., a Cambridge, Mass.-based consulting firm that specializes in large-scale data base technology.
Another upside to these more limited, shared-memory systems is their stability. Symmetrical multiprocessing has existed in the scientific and commercial realms longer than massively parallel systems and, in fact, still accounts for the "lion's share" of the commercial parallel processing market - about 80%. Being the more mature technology, it has become "the programming paradigm that software and tools have been built for," Mr. Winter said.
He said symmetrical multiprocessing systems may find a following among banks and financial service companies that would use the technology for on-line transaction processing and other relatively simple applications for which the shared memory would be more efficient.