Modeling: JPMorgan, Bank One Marry Their Platforms

While the Street last year debated how complementary the investment banking/retail wedding between JPMorgan Chase and Bank One would be, Tom Martin was analyzing a more difficult coupling prompted by the $58 billion merger.

Martin, vp of performance and capacity testing for retail business services at the newly merged entity, was dealing with the major reconfiguration of dual legacy IT applications and systems that comes with the emergence of the world's newest trillion-dollar-asset bank. Day after day, he put the infrastructure through a variety of tests to see how server applications and systems would react to changes in code and hardware. He swapped out servers and mainframes, and applied data loads that were sometimes 80 percent above standard levels.

Had such system strain been anything other than virtual, the downtime for JPMorgan/Bank One's back office might have been a nightmare of non-working teller systems or bungled access for 25 million online customers, among other woes. But by using capacity management and performance-modeling software, Martin and his team put the JPMorgan infrastructure through the wringer only in theory, pinpointing potential user logjams, compatibility issues and even untapped resources before undertaking actual changes or additions to IT.
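To make the idea concrete: capacity models of this sort often rest on queueing theory rather than physical hardware. The minimal sketch below (not HyPerformix's actual engine; the transaction rates and server counts are hypothetical) uses an M/M/c queueing model to predict response time at a baseline load and at the 80-percent-above-standard load Martin describes, flagging the tier as saturated before anyone buys a server.

```python
# A minimal analytic capacity-modeling sketch: an M/M/c queue predicting
# response time as load rises. All rates below are assumed for illustration.
from math import factorial

def mmc_response_time(arrival_rate, service_rate, servers):
    """Mean response time (seconds) for an M/M/c queue, or None if saturated."""
    offered = arrival_rate / service_rate           # offered load in Erlangs
    if offered >= servers:
        return None                                 # queue grows without bound
    # Erlang C: probability an arriving request must wait
    waiting_terms = sum(offered**k / factorial(k) for k in range(servers))
    tail = offered**servers / factorial(servers) * servers / (servers - offered)
    p_wait = tail / (waiting_terms + tail)
    mean_wait = p_wait / (servers * service_rate - arrival_rate)
    return mean_wait + 1.0 / service_rate           # waiting time + service time

# Hypothetical tier: 4 servers, each handling 12 transactions/second.
for label, rate in [("baseline", 40.0), ("+80% load", 72.0)]:
    rt = mmc_response_time(rate, 12.0, 4)
    print(label, "->", f"{rt * 1000:.0f} ms" if rt else "SATURATED: add capacity")
```

Run as written, the model shows the tier comfortably absorbing the baseline but collapsing under the stepped-up load, which is exactly the kind of logjam the article says the team pinpointed before touching production.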

"It helped us understand the differences between infrastructure requirements," says Martin of the modeling software developed by Austin, TX-based HyPerformix (and sold to JPMorgan through private labeler Mercury Interactive). Without it, "we would have had to buy the hardware [for testing], or do educated guesses based off of current load results."

The virtual testing used by JPMorgan, which has customers in 90 million households through the Chase and Bank One nameplates, may have been one of the largest applications to date of pre-deployment modeling in financial services IT.

Most banks and their vendors continue to buy cheap servers for traditional load testing. But more banks and CIOs are starting to realize the long-term costs of ignoring capacity planning, and the folly of adding hardware that doesn't solve inter-application conflicts, according to HyPerformix CEO Noel Barnard. "We're giving them the ability to make more intelligent decisions about how they buy and acquire hardware," says Barnard. "In the past what customers did was just buy more servers. ...And we've reached a point where that no longer makes sense. It's not responsible."

Glenn O'Donnell, a frequent critic of IT organizational practices at financial services firms, says other complex systems are engineered in this manner. "Build a bridge, you have to model it. Build a skyscraper, you model it. You subject it to all sorts of conditions to make sure it's going to survive under realistic conditions." Yet fewer than 25 percent of applications deployed on IT organizations' distributed systems are effectively modeled and tested for capacity or performance, according to Gartner.

The shortcoming is as often inefficient use of excess capacity as it is system logjams, says Theresa Lanowitz, research director for applications and development at Gartner. "Performance has been second to features and functionalities...and getting under budget, rather than whether it's going to perform," she says.

Besides HyPerformix (launched 16 years ago as Scientific Engineering Software), firms are turning to vendors such as IBM Tivoli, OPNET and Compuware for modeling programs, and to analytical tools from Netuitive, ProactiveNet and Panacya.

Martin was well aware of the need for testing when challenged with the JPMorgan/Bank One consolidation. In a July presentation to a Midwest computer professionals group, Martin noted the new bank had 500 applications in the retail division alone. Given a history of little success with first-time installs and of multiple failed changes, it was hard to see how the bank would achieve its goals of reducing IT expenses by $100 million and production outages by 10 percent without testing.

Trying to find new economies of scale at the nation's No. 2 bank might have proved unwieldy without the virtual lab constructed for JPMorgan. Martin says the institution modeled several applications across several operations, and validated components including teller systems, the branch-server environment, middle-layer servers and mainframe requirements. The modeling also pointed the way to a new two-way server system instead of more expensive four-way servers, which cost an additional $20,000 apiece.
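The two-way-versus-four-way call is, at bottom, a cost-per-throughput comparison. The sketch below walks through that arithmetic under stated assumptions; the prices, per-server throughput and utilization headroom are hypothetical stand-ins, chosen only so the per-unit price gap matches the roughly $20,000 the article cites.

```python
# A hedged sketch of the hardware trade-off the modeling surfaced: given a
# throughput target, compare a fleet of cheaper two-way servers against
# four-way boxes. All numbers below are assumptions for illustration.
from math import ceil

def fleet_cost(target_tps, per_server_tps, unit_cost, headroom=0.7):
    """Servers needed to sustain target_tps below `headroom` utilization, and total cost."""
    needed = ceil(target_tps / (per_server_tps * headroom))
    return needed, needed * unit_cost

TARGET = 500  # transactions/second the tier must sustain (assumed)
for name, tps, cost in [("two-way", 90, 15_000), ("four-way", 170, 35_000)]:
    n, total = fleet_cost(TARGET, tps, cost)
    print(f"{name}: {n} servers, ${total:,}")
```

With these assumed figures, eight two-way servers come in well under five four-way ones, illustrating why a model, rather than a habit of buying bigger boxes, can steer the purchase.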

Modeling is only part of the overall pre-deployment testing process at JPMorgan, which also includes load and stress testing against service-level expectations and for maximum application efficiency. Duration testing rounds out the validation framework by uncovering potential long-term performance degradation. "It's not just the modeling" involved with capacity planning, Martin says, "it's the end-to-end view of your applications."
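Duration (or "soak") testing is straightforward in principle: hold a steady load for hours and watch for response-time drift. The sketch below shows one way such a harness might look; it is not JPMorgan's actual tooling, and the URL, window size and drift threshold are placeholders.

```python
# A minimal duration-test ("soak") sketch: hit an endpoint at a steady rate
# and flag response-time drift that suggests long-term degradation (memory
# leaks, fragmentation, table growth). All parameters are assumptions.
import time
import statistics
import urllib.request

def soak(url, duration_s=8 * 3600, interval_s=5, drift_threshold=1.5):
    baseline, samples = None, []
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        t0 = time.monotonic()
        urllib.request.urlopen(url).read()          # one synthetic transaction
        samples.append(time.monotonic() - t0)
        if baseline is None and len(samples) == 100:
            baseline = statistics.median(samples)   # settle-in window
        recent = statistics.median(samples[-100:])  # rolling median
        if baseline and recent > baseline * drift_threshold:
            print(f"degradation: median {recent:.3f}s vs baseline {baseline:.3f}s")
        time.sleep(interval_s)

# Example invocation against a test environment (placeholder URL):
# soak("https://test-env.example.com/teller/ping")
```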

