Taking Off: Core Banking Functions on Client/Servers

An increasing number of banks - most of them community-size - are embracing client/server computer architectures for core banking applications.

Client/server systems consist of groups of personal computers that can use a central computer, called a server, to jointly work on processing tasks. By pooling the resources of many PCs, client/server systems can process more cheaply, and often more quickly, than systems based on mainframes or midrange computers.
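For readers who want a concrete picture of that split, here is a minimal sketch in Python, not drawn from any banking product: a server process holds shared data (here, a hypothetical table of account balances), and client PCs send it requests over the network rather than holding the data themselves. The account numbers, port handling, and function names are all illustrative.

```python
# Minimal sketch of a client/server split: the server holds the shared data,
# clients query it over TCP instead of storing the data locally.
import socket
import socketserver
import threading

ACCOUNTS = {"1001": 2500.00, "1002": 130.75}  # hypothetical deposit balances


class BalanceHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read one account number from the client and reply with its balance.
        account = self.rfile.readline().decode().strip()
        balance = ACCOUNTS.get(account)
        reply = f"{balance:.2f}" if balance is not None else "UNKNOWN"
        self.wfile.write((reply + "\n").encode())


def query_balance(host, port, account):
    """Client side: ask the server for one account balance."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall((account + "\n").encode())
        return conn.makefile().readline().strip()


if __name__ == "__main__":
    # Start the server on an ephemeral port in a background thread.
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), BalanceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    print(query_balance("127.0.0.1", port, "1001"))  # -> 2500.00
    server.shutdown()
```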

Banks have been cautious about using client/server architectures for their most important core banking functions, but there are signs that this is beginning to change.

Though only about 60 banks use client/server technology to process demand deposit accounts and other core functions, nearly 200 are expected to be doing so by the end of next year, according to the Phoenix-based consulting firm M One Inc.

"Client/server is moving from an introductory to a growth stage," said Steven Williams, managing director of M One.

"A year or two ago, banks were still leery about client/server's viability and risks, but banks have gotten over the 'will it work' stage and moved on to 'is it right for me'" he said.

A growth in available software applications has fueled client/server acceptance.

Eastpoint Technology Inc., Bedford, N.H.; Perot Systems, Dallas; and Phoenix International Ltd., Maitland, Fla., are among those providing core banking software. BankWorks Inc., based in Atlanta, and others are expected to join the list soon.

Mr. Williams said that because client/server technology "has not yet scaled up enough to compete with mainframe processing," the only banks client/server can fully support are small ones.

"Rarely do we see a technology that only the small guys can do. This presents an opportunity for small banks," he said.

The experience of the 60 banks already using client/server computing is convincing many others of its benefits. But banks still have a long way to go to fully understand distributed computing, Mr. Williams said.

"For 30 years, vendors have worked on mainframe computing to the point where the systems operate almost flawlessly," he said. "We're only five years into client/server, so in this sense, it's more unreliable."

One bank reporting a good experience with a client/server system is Seneca Falls (N.Y.) Savings Bank.

The $80 million-asset bank last year switched its core processing operations from a mainframe-based service bureau to a client/server system provided by Glastonbury, Conn.-based Open Solutions Inc.

Since the October conversion to the system, the bank has experienced two major system crashes.

But the fact that neither resulted in any data loss or system downtime affirms one of the supposed strengths of client/server computing: the ability to recover quickly from disaster.

In both incidents, which involved the failure of database servers, the bank was able to safeguard its information using data replication software running on the Windows NT operating system.

The replication software, from Octopus Technologies, Yardley, Pa., continuously backs up data to secondary servers so that if a crash occurs, the bank can automatically switch over to the ready backup units.
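The article does not describe how Octopus's product works internally; the sketch below is a hypothetical illustration of the general idea it relies on: every write to the primary store is mirrored to a standby copy, so that when the primary fails, reads and writes can continue against the standby without data loss. The class and method names are invented for the example.

```python
# Hypothetical sketch of continuous replication with failover: mirror every
# write to a standby copy, and redirect traffic to the standby after a crash.
class ReplicatedStore:
    def __init__(self):
        self.primary = {}        # stands in for the primary database server
        self.standby = {}        # continuously updated backup copy
        self.primary_up = True   # toggled off when the primary "crashes"

    def write(self, key, value):
        if self.primary_up:
            self.primary[key] = value
        self.standby[key] = value          # replication: mirror every write

    def read(self, key):
        store = self.primary if self.primary_up else self.standby
        return store[key]

    def fail_over(self):
        """Simulate a primary crash: switch all traffic to the standby."""
        self.primary_up = False


store = ReplicatedStore()
store.write("acct:1001", 2500.00)
store.fail_over()                 # primary server crashes
print(store.read("acct:1001"))    # data still served from the standby -> 2500.0
```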

John Talbot, first vice president and director of operations at Seneca Falls Savings, said the client/server environment is proving more reliable than the bank's old mainframe environment.

"With the mainframe, if you're not down a couple of times a week, it would be rare," he said. "Downtime was a much more frequent event."

He said the client/server network makes it easier to recover data when a mishap occurs.

"With a mainframe, recovery is limited to what data was dumped to an offsite backup location, and you only dump files once or twice day," he said.

Mainframe data recovery involved restoring the previous day's data from a backup tape and then manually keying in transactions posted on the date of the crash.

Mr. Talbot admitted having reservations about moving to client/server, because - in contrast to large-scale computing - "there was no track record to fall back on."

But the bank decided to go ahead with the switch because of the growing industry support for distributed computing and because the bank could acquire system redundancy fairly cheaply.

Ms. Tucker is a freelance writer based in Hazlet, N.J.
