A few years ago Citigroup's Capital Markets division faced a problem that most large trading enterprises were also confronting: an ever-increasing need for processing power to handle the risk management, market analysis and pricing applications that make the business tick, but limited space and budget to expand its data centers.
The rub was that except at times of peak demand, processor utilization hovered around 30 percent, says John van Uden, director of infrastructure and FICC Shared Services Technology at Citigroup.
Now, four years into its implementation of a massive grid computing project, Citigroup has grown its grid far outside the traditional batch processing function into a fully developed shared services model.
And as grid computing evolves in the financial services industry, retail banking and card units are climbing aboard; van Uden envisions a day when many institutions will push their grid computing into the cloud.
To be sure, grid computing isn't new, or news. What is new is the extension of the grid beyond the trading floor, across the enterprise and into the retail banking division, as the need to increase utilization and decrease costs takes precedence in many organizations. "Almost all our customers are having the dialogue about what is mission-critical infrastructure going to look like in the future," says Ivan Casanova, VP of product marketing at a leading U.S. grid computing vendor, DataSynapse. "They were looking at a very specific class of applications that we were grid enabling in the first wave. Now customers are saying, 'This would be great if we could run our standard apps on this.'"
"Initially it was a performance issue [for most clients]," Casanova says. "But that's changed; it has really evolved into more of a cost-driven issue and a utilization issue."
Citi's implementation is a prime example of maximizing utilization. The capital markets group had roughly 10,000 CPUs globally, and data center space was at a premium, not to mention the pressure to keep power and cooling costs down. Citi chose Platform Computing's Symphony grid product and embarked on a four-year path to consolidate its computing assets into a single resource pool with dramatically increased utilization.
But the plan didn't stop there. One of the laments in the green IT movement is that user groups have no idea how much power they consume, and have little incentive to lower it because tracking and billing are not standard procedure. At Citi, since the grid was implemented, individual business units are charged for the processing power they use, creating a shared services environment. Citi now runs a computing cluster that spans three continents and multiple business lines, including equities, fixed income, and back office, van Uden says.
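The chargeback idea itself is simple metering: track CPU-hours per business unit and bill at an internal rate. A minimal sketch of that model (the class, field names, and rate are illustrative assumptions, not Citi's or Platform Symphony's actual system):

```python
from collections import defaultdict

RATE_PER_CPU_HOUR = 0.50  # assumed internal rate, in dollars


class ChargebackMeter:
    """Toy shared-services meter: accumulate usage, then invoice per unit."""

    def __init__(self, rate=RATE_PER_CPU_HOUR):
        self.rate = rate
        self.usage = defaultdict(float)  # business unit -> CPU-hours consumed

    def record(self, unit, cpus, hours):
        # A job that held `cpus` processors for `hours` consumed cpus * hours CPU-hours.
        self.usage[unit] += cpus * hours

    def invoice(self):
        # Each unit pays only for what it drew from the shared pool.
        return {unit: round(h * self.rate, 2) for unit, h in self.usage.items()}


meter = ChargebackMeter()
meter.record("equities", cpus=2000, hours=3)     # 6,000 CPU-hours
meter.record("fixed_income", cpus=500, hours=8)  # 4,000 CPU-hours
print(meter.invoice())  # {'equities': 3000.0, 'fixed_income': 2000.0}
```

The point of the scheme, as the article notes, is behavioral: once consumption is visible and billed, units gain an incentive to trim it.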
"The challenge you get now is businesses with different priority orders, [and we are] trying to do priority orders across three business units that don't typically do priority across themselves," he says.
As each business unit was added to the grid, "We spent a lot of time explaining our cost pools," he says. "With every new client we'd have to explain why our rate was the rate. And I'd hear, 'Well, I talked to the guy at Goldman and he charges X.'"
Since then, Citi has seen exponential growth in the number of CPUs in the cluster and is now near 20,000, van Uden says; there are periods of the day where the utilization rate is 100 percent. The extension of the grid across the enterprise will likely follow a predictable evolution now that "most of the banks have now done the apps that land naturally on the grid," he says.
With internal customers used to paying for what they use, look for excess demand to be sent into the cloud. "There's some stuff we wouldn't mind pushing out to somebody else's data center," van Uden says. "I think we're still going to have data centers; I think [we'll] just have an overflow [model]...it's not cloud computing, it's cooperative data centers."
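The overflow model van Uden describes amounts to a routing rule: run work in-house until the pool is saturated, then spill the excess to a partner facility. A back-of-the-envelope sketch, using the cluster size cited in the article and an assumed trigger threshold:

```python
LOCAL_CAPACITY = 20_000    # CPUs, roughly the cluster size cited in the article
OVERFLOW_THRESHOLD = 0.95  # assumed saturation point that triggers overflow


def route(job_cpus, cpus_in_use):
    """Decide whether a job runs on the local grid or overflows to a partner."""
    utilization = cpus_in_use / LOCAL_CAPACITY
    if utilization < OVERFLOW_THRESHOLD and cpus_in_use + job_cpus <= LOCAL_CAPACITY:
        return "local"
    return "partner"  # overflow to a cooperating data center


print(route(500, 10_000))  # local   (pool half used)
print(route(500, 19_800))  # partner (pool at 99 percent)
```

In this scheme the owned data centers stay sized for steady-state load, while peak demand is someone else's capital expense.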
Some banks may already be using a data center in the cloud without realizing it. Sun Microsystems has gotten into this business with its Network.com division, a pay-per-use utility that charges $1 per CPU-hour with no contract costs. Independent software vendor CDO2, which offers a collateralized debt pricing application, connects users of its app to Sun Microsystems' Network.com utility behind the scenes.
Users don't have to log in to Network.com, yet they have access to as much CPU power as they need to get the job done. "As far as the users are concerned, they've updated an Excel spreadsheet, waited for the work to come back, and it's come back in maybe 10 seconds, but that work may have been done on tens or hundreds of different CPUs running on Network.com," says Gary Kendall, a director at CDO2, in a Sun webcast. CDO2's is the only financial services application currently running on Network.com; most of Sun's penetration in this area has come in life sciences and computational mathematics.
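The economics of that burst are worth a back-of-the-envelope check: at $1 per CPU-hour, a 10-second run fanned out across 100 CPUs costs well under a dollar (actual Network.com billing granularity may differ; this is the idealized arithmetic):

```python
RATE = 1.00  # dollars per CPU-hour, per Sun's published Network.com price


def burst_cost(cpus, seconds, rate=RATE):
    """Idealized pay-per-use cost: CPUs held, times hours held, times rate."""
    return cpus * (seconds / 3600) * rate


# 100 CPUs for 10 seconds = 100 * (10/3600) CPU-hours ~= 0.28 CPU-hours
print(round(burst_cost(100, 10), 2))  # 0.28
```

That pricing is what makes the model attractive for spiky workloads like pricing runs: the user pays cents for a burst that would otherwise justify idle owned hardware.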
Amazon's EC2 and a new competitor, GoGrid, are among the other first movers in the data-center-in-the-cloud category. If there's any question about the demand for this, check out GoGrid's self-reported customer metric: zero to 1,000 paying customers in the four months since its March launch.
Currently servers in the cloud are primarily used by developers to speed the application-building process. That is likely to change as more vendors get into the business; Deloitte estimates the cloud computing market will reach $95 billion by 2011.
But the evolution of grid, outside the trading floor, across the enterprise, and perhaps soon into the cloud, still gets back to the basics of what made grid computing crucial in the first place, says Jim Mancuso, general manager of Platform's financial services business unit: "Take a seven-hour job and make it run in seven minutes." (c) 2008 U.S. Banker and SourceMedia, Inc. All Rights Reserved. http://www.americanbanker.com/usb.html/ http://www.sourcemedia.com/