To Turn Data into Information, Only Client-Server Will Do

The widespread and growing acceptance of client-server architecture in banking reflects two distinct phenomena. The first is that client-server technology has proven sufficiently robust and economical to perform the most demanding banking applications. Client-server systems are already in stable operation for such mission-critical applications as branch automation, trading and securities processing, and decision support. The number and variety of successful installations are increasing daily.

The second phenomenon is that client-server architecture emphasizes the very processes that bankers are now demanding from their technology systems: vastly improved analysis and communication of information and faster, more productive applications of this information to solving essential business problems. Distribution of information is the fundamental principle of the client-server design.

Traditionally, information technology systems in banking have been misnamed and misunderstood. The huge mainframe systems that absorb most of large banks' information technology budgets actually offer little information to managers. The truth is that mainframes are not designed to give people information but to do essential transaction and accounting processes that constitute banking's nuts-and-bolts operations. They are, in fact, production systems, analogous in almost every way to factory assembly lines.

In recent years, bankers have come to understand the real nature of transaction-processing systems and have begun to develop separate and independent systems designed with the primary mission of providing information to support decisions. The evolution of these real information systems has been greatly enhanced by the development of client-server technology. As these new technology tools grew in capability, it became obvious that they were far better suited to the needs of business analysts and financial managers than mainframes were.

Enthusiasts predicted that client-server technology, with its inexpensive and scalable processing power, would quickly drive the mainframe from the banking marketplace. Skeptics predicted that client-server systems would prove too complex and unreliable to drive critical banking decision-support applications. Experience has proven both of these extreme views wrong.

Client-server technology has been successfully adopted for a variety of banking applications, including branch automation and customer service systems. Decision-support systems using client-server architecture are performing well. But installing these systems has proven more difficult and demanding than anticipated.

Part of the difficulty is explosive growth in the managerial processes these systems are intended to support. As soon as good decision-support information and techniques become available, managers think of additional ways to apply them. The process accelerates so rapidly that the information appears to be addictive. The more we get, the more we want, and the more quickly we want it. Development of relational data bases, which readily allow information to be augmented and applied in unanticipated ways, has only added fuel to the fire.

One of the most productive techniques for transforming information technology systems into information-producing systems has been development of relational data bases into data warehouses. Under the data warehouse concept, data required for decision support are extracted from operational systems and consolidated in a single, fully reconciled relational data base.
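In practice, that extract-and-consolidate step amounts to a repeatable load-and-reconcile routine. The Python sketch below is a minimal illustration of the idea, not a production design; the file names (deposits.db, loans.db, warehouse.db) and the accounts table layout are hypothetical stand-ins for a bank's actual operational systems, which are assumed to already exist and hold balances as of a common date.

```python
import sqlite3

# Illustrative sketch only: pull balances from two hypothetical
# operational systems and consolidate them into one warehouse table,
# then verify that the consolidated totals match the sources.

warehouse = sqlite3.connect("warehouse.db")
warehouse.execute("""
    CREATE TABLE IF NOT EXISTS consolidated_balances (
        source      TEXT,   -- originating operational system
        account_id  TEXT,
        balance     REAL,
        as_of_date  TEXT
    )
""")

for source_name, path in [("deposits", "deposits.db"),
                          ("loans", "loans.db")]:
    src = sqlite3.connect(path)
    rows = src.execute(
        "SELECT account_id, balance, as_of_date FROM accounts"
    ).fetchall()
    warehouse.executemany(
        "INSERT INTO consolidated_balances VALUES (?, ?, ?, ?)",
        [(source_name, *row) for row in rows],
    )

    # Reconciliation check: the warehouse total for this source must
    # agree with the operational system's own total before we commit.
    src_total = src.execute(
        "SELECT SUM(balance) FROM accounts").fetchone()[0] or 0.0
    wh_total = warehouse.execute(
        "SELECT SUM(balance) FROM consolidated_balances WHERE source = ?",
        (source_name,),
    ).fetchone()[0] or 0.0
    assert abs(src_total - wh_total) < 0.01, f"{source_name} fails to reconcile"

warehouse.commit()
```

The reconciliation check at the end is the essential step: the warehouse earns its keep only if its totals provably agree with the operational systems that feed it.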
This is especially useful because, when properly applied, the process creates a set of numbers that constitute a unified and incontestable financial reality for the whole institution. This benefit alone may justify the costs of adopting such an integrated data base. It is the only effective antidote to the data chaos prevalent in so many banks.

The architecture of the client-server system lends itself to this application, since the server is a logically and technically appropriate platform for the data base. But the rapid growth of these data bases has placed enormous demands on the supporting servers. At present, the most viable strategy for meeting these demands is the use of parallel processing servers. Banks are already installing decision-support systems with symmetrical multiprocessing server platforms that can support data bases of up to several hundred gigabytes. In some banks, these systems are being installed as a shared platform infrastructure that will support other data bases as well as the decision-support information warehouse.

As impressive as these systems and data bases may seem, they will probably appear modest within a few years. The banking industry is subject to forces that will encourage dramatic growth in the uses of information and the size of supporting data bases.

The first of these forces is the industry trend to growth and consolidation. With deregulation, banking capital markets have become increasingly efficient. Successful banks will grow rapidly and continue to absorb less successful competitors. A few large banks now number their customers in tens of millions, and most regional banks will have at least several million accounts. Banks will also continue to offer a broader range of products and distribution channels to their growing customer bases. Many banks now offer products that were nonexistent only a few years ago. Electronic banking systems will create ever newer channels and products for both consumers and commercial customers.

The second force is that, as noted above, bank managers will apply their growing decision-support capabilities to a wider and wider range of applications. Now that they have the power to measure the profitability of business units and products, banking analysts want to extend their analyses to customer relationships and channels of distribution. Bankers will also want to extend their range of decision-support analysis to include external information, such as population statistics or competitor intelligence, to improve the efficiency of marketing efforts and to satisfy regulatory requirements.

The third major force is perhaps the most important: delivery of decision-support information to tactical decision-makers in line business units. This distributive process has already begun at many banks, where budgeting and planning are being pushed down to the lowest-level business units, such as the branch office. As soon as reliable product and customer information profiles become available, this information may be pushed down even further, to individual platforms and even to the teller line.

The net effect of these forces will be that even relatively small, specialized, and efficient banks will be maintaining very large decision-support data bases. They will also be offering a broader range of analytic capabilities to a much wider range of users. The increases in data base size, range of applications, and authorized users will generate enormous demands for processor capability.
Already, some large banks are anticipating that their decision-support data bases may reach sizes of one terabyte or more. It is unclear whether symmetrical multiprocessing servers can adequately support data bases of this size. Servers based on the alternative massively parallel processing technology, which can incorporate several hundred processors within a single platform, are supporting very large data bases in other industries, such as transportation and government. At least two banks are now installing such servers to support data bases of 500 gigabytes or more. Only very large banks, with assets of $30 billion or more, are now adopting data bases that will reach into the terabyte range. For these banks, the choice of server technology is already critical. Note that the choice is not whether to adopt client-server architecture for terabyte data bases but which technology to use within the client-server framework.
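Why several hundred processors help with queries at this scale can be shown with a toy example. The sketch below is a conceptual illustration only, not a description of any vendor's platform: it uses Python's standard multiprocessing module to split an aggregation across worker processes, loosely the way a massively parallel server divides a table scan among its nodes.

```python
from multiprocessing import Pool

def scan_partition(partition):
    """Aggregate one horizontal slice of the table, as a single
    processor node would in a massively parallel server."""
    return sum(balance for _, balance in partition)

if __name__ == "__main__":
    # A toy "table" of (account_id, balance) rows, split into horizontal
    # partitions the way an MPP server distributes data across nodes.
    table = [(i, float(i % 1000)) for i in range(1_000_000)]
    n_nodes = 8
    partitions = [table[i::n_nodes] for i in range(n_nodes)]

    # Each node scans its own partition in parallel; a final step merges
    # the partial results. More nodes means more partitions scanned at
    # once, which is the source of the scaling.
    with Pool(n_nodes) as pool:
        partials = pool.map(scan_partition, partitions)
    print("total balances:", sum(partials))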
