Key challenge: solving the bottleneck in data storage capacity.

The gradual growth of a bank's storage needs as check volumes rise has not been a cause of overwhelming concern - a typical data base entry storing the pertinent information on a processed check might run only 900 bytes.

However, technology vendors are slowly introducing check image processing, which could cause a radical change in data storage growth.

A graphic image of a check may consume as much as 160,000 bytes, and that's just a black-and-white image. In the case of, say, BankAmerica, now processing some 20 million checks each day, that translates to a whopping 3.2 trillion bytes every day.
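For scale, the arithmetic is worth spelling out. Here is a quick illustrative sketch in Python; the byte counts and check volume are simply the estimates quoted above:

```python
# Illustrative arithmetic only; all figures are the article's estimates.
BYTES_PER_RECORD = 900          # conventional data base entry per check
BYTES_PER_IMAGE = 160_000       # black-and-white check image
CHECKS_PER_DAY = 20_000_000     # BankAmerica's reported daily volume

record_load = BYTES_PER_RECORD * CHECKS_PER_DAY   # 18 billion bytes/day
image_load = BYTES_PER_IMAGE * CHECKS_PER_DAY     # 3.2 trillion bytes/day

print(f"Records: {record_load / 1e9:.0f} gigabytes per day")
print(f"Images:  {image_load / 1e12:.1f} terabytes per day")
print(f"Growth:  roughly {image_load / record_load:.0f} times the load")
```

Imaging, in other words, multiplies the daily storage load by a factor of nearly 180.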

Although banks have various schemes to reduce the storage crunch, the problem is still monumental.

Even without the image processing crunch, storage needs have grown so quickly that some industry estimates suggest financial institutions are allocating 40% to 60% of overall data processing expenses to purchase additional storage.

Around the world, approximately $50 billion a year is spent on disk storage for computers.

Storage is big business and is bound to get bigger as the storing of graphic images takes hold in the banking industry. This quantum leap - from gradual growth in storage needs to overwhelming growth - seems to have caught data processing professionals by surprise.

Of course, new high speed tape drives coming onto the market and read/write optical systems offer new levels of storage capacity. For example, some systems can store five gigabytes of data on a single tape.

And there is talk of storing up to a terabyte - 1,000 gigabytes - of data on one tape using new materials and packing techniques.

But the slow retrieval speed inherent in these systems, which are part electronic and part mechanical, sometimes seems a step backward. Users wanting more information faster fret as unseen robotic arms fetch tapes or disks from cubbyholes to mount them on multiple drives in a frenzy of whirring and clacking. This can translate into employees and customers fuming at systems that grow slower instead of faster.

There is also the increasing complexity of the software necessary to organize large data bases for multiuser retrieval, and the headache of searching through data warehouses bursting at the seams because they use antiquated menu routines.

Organizations like the National Aeronautics and Space Administration, for instance, are receiving so much image information from satellite sensors that they openly express frustration with their inability to organize and analyze the data. So they simply store it, creating data bases so huge that chances are the information will never be retrieved.

What this means is that, instead of being subordinate to the needs of the mainframe, storage is becoming a specialty in its own right.

New executives, with titles like vice president of storage management, are giving as much thought to how data are stored as other executives give to deciding how to crunch the data. We are beginning to see many types of storage devices besides disk drives in a smart bank's storage "warehouse."

But what about the capacity bottleneck? Is there a technology out there that can meet the need for fast, mass storage, at a reasonable cost and floor space requirement?

Some say the answer is a recent idea called the redundant array of independent disk drives, or Raid. It is one of the fastest-growing areas in the disk drive market. By using an array of small disk drives - similar to those found in desktop personal computers - and connecting them so that a data block is distributed across all the disks in the array in parallel fashion, the technology achieves high-density, fast storage in a relatively small and inexpensive package. Still, it presents some reliability problems that have not yet been solved.
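The striping idea behind Raid can be sketched in a few lines of Python. This is a toy illustration of the concept only; the chunk size is an assumption, and real arrays add parity or mirroring, which is omitted here:

```python
# Toy illustration of striping: a block is carved into sector-sized
# chunks and dealt round-robin across several small drives, so all
# drives read and write in parallel. Fault-tolerance logic is omitted.

CHUNK = 512  # an assumed striping unit

def stripe(block, num_drives):
    """Deal a data block across num_drives 'drives', chunk by chunk."""
    drives = [bytearray() for _ in range(num_drives)]
    for k, i in enumerate(range(0, len(block), CHUNK)):
        drives[k % num_drives] += block[i:i + CHUNK]
    return [bytes(d) for d in drives]

def unstripe(drives, total_len):
    """Reassemble the original block by interleaving the chunks."""
    out = bytearray()
    offsets = [0] * len(drives)
    k = 0
    while len(out) < total_len:
        d = k % len(drives)
        out += drives[d][offsets[d]:offsets[d] + CHUNK]
        offsets[d] += CHUNK
        k += 1
    return bytes(out)

data = bytes(range(256)) * 10  # 2,560 bytes of sample data
assert unstripe(stripe(data, 4), len(data)) == data
```

The catch the skeptics point to is visible in the sketch: with every block spread across many drives, the failure of any one drive can take the whole block with it, which is why production arrays must pair striping with redundancy.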

More futuristic technologies, such as 3-D storage using holographic crystals and "biological" memories capable of storing megabytes in the space of a human cell, still haven't made it out of the lab - despite years of promising whispers.

New tape drives with improved materials and scanning methods promise large capacity improvements in data storage for archival purposes. For example, tape towers and optical disk jukeboxes - devices that move storage media into read stations according to the program call - are becoming faster. With more read stations and more sophisticated software, they now give the appearance of true random-access speed.

But what about mass storage that allows disklike access time, a must for applications like image processing? There is one interesting technology that is just now quietly coming into the marketplace.

Called the Neurex Intelligent Memory System, it was invented by a former Ford Motor Co. engineer named Joe Bugajski. His patented creation stores data like the human brain does. It can store a full terabyte of information in a box the size of a typical refrigerator.

The Neurex system stores data as patterns in a hierarchical network held in very fast memory. As the machine reads a data stream being stored, the technology forms a network of these patterns in approximately four to six gigabytes of D-Ram, or dynamic random access memory.

Once this network is formed, other data being read are "remembered" as a combination of patterns, rather than as the actual bits themselves. These representational "memories" - the unique indicators that can be used to reconstruct the actual bit streams they represent - can be as little as 1/200th the size of the data blocks they stand for, and they can be stored on external disk or kept in D-Ram for higher access speed.
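The description suggests something akin to dictionary coding, though Neurex's actual method is patented and not public. A loose, hypothetical analogy in Python:

```python
# A loose analogy only - not Neurex's actual method, which is
# proprietary. Repeated patterns are stored once; each block is then
# kept as the short sequence of pattern ids ("unique indicators")
# needed to rebuild its bit stream.

class PatternStore:
    def __init__(self, pattern_size=64):  # pattern size is an assumption
        self.pattern_size = pattern_size
        self.ids = {}       # pattern bytes -> integer id
        self.patterns = []  # integer id -> pattern bytes

    def store(self, block):
        """Return the compact 'memory' standing in for the raw bits."""
        memory = []
        for i in range(0, len(block), self.pattern_size):
            p = block[i:i + self.pattern_size]
            if p not in self.ids:
                self.ids[p] = len(self.patterns)
                self.patterns.append(p)
            memory.append(self.ids[p])
        return memory

    def recall(self, memory):
        """Reconstruct the original bit stream from its pattern ids."""
        return b"".join(self.patterns[i] for i in memory)
```

On highly repetitive data - the blank background of a check image, say - the indicator sequence is far smaller than the data itself; whether it approaches the 200-to-1 ratio claimed above would depend entirely on how repetitive the data are.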

When the mainframe wants a block of data, Neurex translates the request into the virtual location of the dense memory, finds the stored memory, and plays it back through the various levels. The stored dense memory on the top level is associated with several associative memories on the next level down. These in turn point to others on the level below, and so on, in a very fast parallel cascading effect.

At the bottom level, all of these associative memories call up the actual bits they "remember," and these bits go out on the channel. The process of recalling the memories is extremely fast, using standard parallel processing techniques.
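Read literally, the cascade resembles expanding a tree of references level by level. A hypothetical sketch follows; the structure and names are illustrative, not a documented Neurex interface:

```python
# Hypothetical sketch of cascading recall: each memory at one level
# names several memories at the level below; only the bottom level
# holds literal bytes. Recall expands level by level, then emits bits.

def recall(top_id, levels, leaves):
    """Expand a top-level memory id into the bytes it stands for.

    levels: list of dicts, one per level, mapping id -> child ids
    leaves: dict mapping bottom-level id -> literal byte pattern
    """
    frontier = [top_id]
    for level in levels:  # each level's lookups are mutually independent
        frontier = [child for node in frontier for child in level[node]]
    return b"".join(leaves[i] for i in frontier)

# A two-level toy hierarchy over three stored byte patterns.
levels = [{0: [1, 2]}, {1: [10, 11], 2: [12]}]
leaves = {10: b"PAY ", 11: b"TO THE ", 12: b"ORDER OF"}
print(recall(0, levels, leaves))  # b'PAY TO THE ORDER OF'
```

Because the lookups within each level are independent of one another, they can be dispatched in parallel - which is presumably what gives the cascade its speed.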

This hierarchical recall technique is very similar to the workings of the brain, except Neurex never forgets anything it stores. The mainframe attached to Neurex simply operates as if it's working with a very large disk attached in a standard way.

In addition to the speed and size advantages of Neurex, Mr. Bugajski believes the real power of the technology is its capacity for recognizing complex patterns very quickly while storing them. Now that the storage system is in prototype, he says, the company is working on intelligent applications such as signature recognition and fraud detection, looking forward to the day when Neurex can detect the "rightness" of an individual's signature as the data are being compressed for storage.

Will this brainlike hierarchical network approach to storing data lead the banking industry into a new era? Nobody knows - yet.

However, one thing is clear. A breakthrough is needed in the storage area if the information age is to keep its heady promises to skeptical users and consumers demanding increased access to information.

A breakthrough is also needed by bankers in strategic planning for future storage needs. Emerging technology seems to suggest a multitechnology storage solution many times more complicated than simply adding another disk drive.

This means bank executives must become educated on the benefits and pitfalls of new storage technology. After all, technology experts would not have invented the words petabyte (1,000 terabytes) and exabyte (1,000 petabytes) unless they had an application in mind. Although such applications may not come to banking soon, it makes sense to prepare early for the new mass storage reality.
