Data Centers Heat Up

The energy consumed by data centers around the world, according to some estimates, will reach 2 trillion kilowatt-hours a year by 2020; data centers already account for about 2% of all power used worldwide. But several trends are converging to make data centers, the engines behind corporate IT, more efficient and less energy draining. Server, storage and network virtualization and the movement of some computing tasks to public clouds are helping reduce data center power needs. And forward-thinking data center managers are letting temperatures in their facilities rise as high as 85 degrees Fahrenheit, enabling dramatic savings.

Since the completion of the Wells Fargo-Wachovia merger in February of this year, server virtualization and data center efficiency have replaced integration issues as top IT priorities for the San Francisco bank. In the past few years the $1.3 trillion-asset bank, under the leadership of Jim Borendame, executive vice president and head of compute platform services, has deployed server consolidation and virtualization, de-duplication, and desktop and power management to reduce energy use, while also reducing the number of satellite data center locations. He plans to close 12 data centers over the next couple of years.

In 2008, the bank opened a power and thermal optimization lab to test how it can use equipment more effectively to save space and power and improve server and storage utilization. "We've given our storage engineering team quite a bit of feedback on how they are consuming storage on a per-terabyte basis," Borendame says. "Right now, I can get 178 terabytes in a square foot, and have increased the power efficiency of that footprint by 85%." In the previous generation of technology, a mere 93 terabytes could be stored per square foot.

The bank is not using super high-end equipment -- it's got standard Hitachi, EMC and NetApp storage devices -- but it's engineering and configuring the machines in a way that drives utilization up. "We've also lifted the utilization of our storage -- we're now using more of the available capacity through provisioning and leveraging some of the virtual storage mapping capabilities," he says. "We've found a quirk of VMware -- people tend to over-allocate virtual CPUs and virtual memory." In other words, they ask for more computing resources than they need. The bank uses the reporting and analytics in VMware's vCenter to find these discrepancies and trim back, reaping substantial gains in capacity and performance, Borendame says.
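
To make the over-allocation check concrete, here is a minimal sketch of the kind of analysis a team could run over a utilization report exported from a tool such as vCenter. The CSV file name, column names and 50% headroom threshold are illustrative assumptions, not Wells Fargo's actual reports or criteria.

```python
# Sketch: flag over-allocated VMs from an exported utilization report.
# Assumes a hypothetical CSV with columns: vm_name, vcpus_allocated,
# peak_cpu_pct, mem_gb_allocated, peak_mem_gb. The threshold below is
# illustrative, not the bank's actual policy.
import csv

HEADROOM = 0.5  # flag VMs whose peak usage is under half of what they asked for

def find_overallocated(report_path):
    flagged = []
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            cpu_used = float(row["peak_cpu_pct"]) / 100.0
            mem_used = float(row["peak_mem_gb"]) / float(row["mem_gb_allocated"])
            if cpu_used < HEADROOM or mem_used < HEADROOM:
                flagged.append((row["vm_name"],
                                f"peak CPU {cpu_used:.0%}",
                                f"peak memory {mem_used:.0%}"))
    return flagged

if __name__ == "__main__":
    # "vm_utilization.csv" is a placeholder path for the exported report.
    for vm, cpu, mem in find_overallocated("vm_utilization.csv"):
        print(f"{vm}: {cpu}, {mem} -> candidate to trim vCPU/memory")
```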

By using de-duplication (a method of reducing storage needs by eliminating redundant data), the IT team has been able to load up storage arrays. "It's extremely effective; it's helped us change our hardware configurations in noticeable ways," Borendame says.
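
For readers unfamiliar with the technique, the sketch below shows the principle behind de-duplication: identical blocks of data are stored once and referenced by a hash. Production storage arrays use variable-size chunking, collision handling and compression; this toy version, with an illustrative file path and block size, only shows why redundant data shrinks so dramatically.

```python
# Minimal block-level de-duplication: each unique chunk is stored once
# and files keep only a list of chunk references. Chunk size and the
# file path are illustrative assumptions.
import hashlib

CHUNK = 4096  # bytes per block

def dedupe(path, store):
    """Return the chunk references for a file, adding new chunks to store."""
    refs = []
    with open(path, "rb") as f:
        while block := f.read(CHUNK):
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)   # keep each unique block once
            refs.append(digest)
    return refs

if __name__ == "__main__":
    store = {}
    refs = dedupe("backup.img", store)  # placeholder file name
    print(f"{len(refs)} blocks referenced, {len(store)} unique blocks stored")
```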

The bank has also been steadily getting more work out of its servers. "In 2011, for every physical server we installed, we took out 2.8 physical servers on average, which is a big deal," Borendame says. "Just in the month of April, we took out 816 servers."

Before a server is refreshed, IT staff take a hard look at whether it is still needed, he says. "If the server is physical, we do everything we can to take it to a virtual environment." About 80% of new servers at Wells Fargo are virtual, and the bank's overall server virtualization rate is about 38%.

Pre-merger, power consumption for the Wachovia and Wells Fargo data centers was growing 17-19% annually. "We've been able to go to zero and we're starting to shrink the footprint," Borendame says.

Because the data centers represent about 15-17% of the power bill of the entire company, "when you stop growing like that, you help the whole company in its green mission," Borendame says. "This year we'll be at zero to negative growth in our data centers." In 2009, Wells Fargo promised to reduce its overall greenhouse gas emissions by 20% within 10 years. At the end of 2011, the company had already achieved a 12% reduction, so on Earth Day Wells Fargo announced a new corporate goal of a 35% reduction.

One effort that's helping is that the bank is targeting a temperature of 80 degrees Fahrenheit in its core regional data centers. "We're not planning to do it outside of that space because it's not easy to manage in small spaces," Borendame says. Today, a couple of the data centers run at 75 degrees. "We've asked them to do it very slowly so that we don't create thermal stress in the electronics," he says. "I'm very comfortable that we'll be able to operate at 80 degrees." In fact, some new technologies the bank has installed in the past couple of years can operate in the 90 to 100 degree range.

The risk of running higher temperatures, in Wells Fargo's experience, is not so much electronic breakdowns but material changes, such as problems with insulation, wiring and connectors. Some older connectors, for instance, will corrode as temperatures get higher.

The higher temperatures don't present a problem for the people working in the data centers because there aren't any; these are lights-out facilities. "The only time people are in there is if somebody is doing repair," Borendame explains. The amount of discomfort is therefore limited to short periods of time. "We did check local ordinances for each of the core and regional data centers to see if there were any special local considerations we needed to respect as well," he says.

Because Wells Fargo has shrunk the contents of its data centers so much through the use of virtualization, the UPS systems and cooling systems have been reconfigured to be more efficient. "Just in one data center we took $80,000 a year out of the electric bill by adjusting the cooling systems," Borendame says. "That's before we started moving to 80 degrees. We think this year the changes we're making will take almost $2 million out of our electrical bills."

Much of this work is the foundation for building an internal cloud, in Borendame's view. "Virtualization is the entry card to cloud computing," he says. "It's not easy for a financial institution that's highly regulated to use hybrid or public clouds and still be regulatory compliant. There are some providers that are beginning to realize it's a very different dynamic for a regulated financial institution. We're repositioning ourselves to leverage our own internal cloud capabilities as we go forward. As the technology and the service providers provide capability we feel meet our needs, we will consider cloud down the road."

Although some might call the bank's virtualized and easily scaled server farms a private cloud already, one missing element is self-service provisioning that lets users log in and request their own computers. "We've purposely held back from self-service provisioning, we really want to crawl, walk, run in that," Borendame says. The bank has built portals that its provisioning teams have begun using. "We want to make sure we get our processes straight and get our product mix right before we go to full self-service provisioning."

Borendame hopes to avoid making an inadvertent misstep along the way. "If somebody is self-service provisioning and they ask for more than they need in a material way, they can be wasting funds," he points out. "If they under-provision, they could cause themselves some interesting problems."

 

PNC TURNS TO WATER CHILLERS

Pittsburgh-based PNC has also found the main opportunity to reduce data center power consumption lies in improving the technology used for cooling the facilities.

In its data centers in Pittsburgh and Cleveland, the $271 billion-asset bank is moving toward central chilled water plant cooling which, it is hoped, will reduce the cost of cooling by 40% to 60%. "The industry judges how well it's doing based on kilowatts per ton, how many watts do I consume to produce a ton of cooling," says Paul Fusan, vice president of the corporate realty group at PNC. The bank is moving from two kilowatts per ton down to one kilowatt per ton.
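
The kilowatts-per-ton metric is simply the electrical power drawn by the cooling plant divided by the tons of cooling it delivers (one ton of refrigeration is 12,000 BTU/hr, roughly 3.5 kW of heat removed). The sketch below, using an assumed cooling load and electricity price rather than PNC's figures, shows what halving that metric does to the annual cooling bill.

```python
# Kilowatts per ton = electrical power drawn by the cooling plant divided
# by tons of cooling delivered. The load and electricity price below are
# illustrative assumptions, not PNC's numbers.
COOLING_LOAD_TONS = 500        # assumed heat load to remove
PRICE_PER_KWH = 0.10           # assumed $/kWh
HOURS_PER_YEAR = 8760

def annual_cooling_cost(kw_per_ton):
    return COOLING_LOAD_TONS * kw_per_ton * HOURS_PER_YEAR * PRICE_PER_KWH

before = annual_cooling_cost(2.0)   # legacy unitary AC units
after = annual_cooling_cost(1.0)    # central chilled water plant
print(f"before: ${before:,.0f}/yr, after: ${after:,.0f}/yr, "
      f"saving {1 - after / before:.0%}")
```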

The traditional means of cooling older data centers has been to run many independent air conditioning units -- the Pittsburgh data center, for instance, had about 50 self-contained operating units, "which was great for redundancy," Fusan says. "Back when we built this place in the early 90s, the cost of power was not a concern. Of course it is now. Central chilled water plants are giant air conditioners that take warm water in, take the heat out of that water and send that heat outside of the building, sending the chilled water back into the building to gather more heat." Pumps move the chilled water around the data center to air conditioners that, instead of having compressors, have only a chilled water coil and a fan.

PNC's Cincinnati data center already had a central chilled water plant when the bank acquired it. "That's why we feel comfortable making these predictions - we have a model we're running right now, we know what the kilowatts per ton is there, so we're feeling optimistic," Fusan says.

The bank is in the process of installing four new chillers in the Pittsburgh facility that will be tested in August and go live in October. In the Cleveland data center, a central chilled water plant has already been installed and is being expanded to cover the entire data center. It will go online in the second quarter of 2013. "We're in the process of migrating our loads off those little unitary systems and onto the central plant, but it's a process," Fusan says. "There's a lot of risk management involved."

The bank does not use air-side economizers, the cooling approach Deutsche Bank deployed last year that uses outside air to cool data centers, because of concerns about air quality, Fusan says. "Whatever's outside is coming in," he points out. "Just a month ago there was a building fire over the hill from here, and had we been on air-side economizers at the time it would have been a disaster." PNC's new equipment does use outside air to cool the data center, but not directly. "When outside temperatures are cold enough, we run our outside chilling towers to drive our condenser water temperature down, then we run that through a heat exchanger instead of a chiller, and we shut the chillers off. That's called a wet-side economizer, and we're not now susceptible to what may be on the outside of our data centers."

Fusan's group tests such ideas in a Pittsburgh test lab and builds the equipment themselves. "We have a double thermal exchange wheel and we move tremendous amounts of air from outside the wheel and inside the wheel," Fusan explains. "The two air streams never connect; the energy is absorbed by the wheel on one side and released by the same wheel on the other side of the air stream. We're producing all our cooling by running nothing more than fans and the dry wheels. We think that will reduce our costs in that area up to 75%."

PNC is also raising the temperatures of its data centers to reduce cooling costs. "When you look at the operating spec on most electronic equipment, we're babying these things way too much," Fusan says. "The days when they had to have 68 degrees or they were unhappy are gone."

The bank runs a segregated environment in its data centers, with servers in enclosed cabinets with chimneys that go to the ceiling. There is no "hot aisle," the aisle found in most data centers into which servers exhaust their hot air; instead, the heat the servers give off is sent straight out of the room.

PNC has been testing servers to determine their energy efficiency. "We've created our own unit of work, we took servers and bench-tested them ourselves," Fusan says. "We measure the heat produced and power consumed, and put together a profile of the efficiencies of server manufacturers." He would not reveal how various products have fared in such tests. "As elementary as it was, it amazed me to see how differently they performed, I thought they'd all be in the same ball park now that you have VMware. Some servers running at 20% load use as much power as those running at 100% load."
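
A profile like the one Fusan describes boils down to dividing a fixed unit of work by the energy consumed while completing it. The sketch below uses invented sample readings (the actual workload and measurement rig are not disclosed) to show how two servers running the same job can come out very differently.

```python
# Sketch of a performance-per-watt profile: run the same fixed "unit of
# work" on each server and divide work done by energy consumed. The
# sample readings are invented for illustration.
samples = {
    # server model: (units of work completed, average watts, hours of run)
    "server_A": (1000, 350, 2.0),
    "server_B": (1000, 520, 2.0),
}

for name, (work, watts, hours) in samples.items():
    kwh = watts * hours / 1000.0
    print(f"{name}: {work / kwh:.0f} work units per kWh")
```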

Mainframes are generally more efficient for the amount of work produced, Fusan notes. "The mainframe industry has focused on energy savings and efficiency," he says. "The server world is about 15 years behind."

As for cloud computing, PNC is in wait and see mode, according to Tom Gooch, senior vice president and director of technology. "We don't leverage cloud in a big way at all," he says. "Part of the challenge is, how do you leverage that with the applications you have."

 

DEUTSCHE BANK DRAMATICALLY REDUCES ENERGY USE

Deutsche Bank has even more ambitious goals: it aims to reduce its data center power consumption to one-fifth of its current rate. A combination of higher air temperatures, air-side economizers and cloud computing initiatives are making this possible.

One of its New York City area data centers, which the bank calls its Eco Data Center, is cooled by passing fresh air across a giant sponge to add moisture, which cools the air by 25 to 30 degrees. Even if the outside temperature is 105 degrees Fahrenheit, the air can be cooled to 80 degrees without traditional air conditioning.
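
The "giant sponge" is a direct evaporative cooler: outlet air temperature is roughly the outdoor dry-bulb temperature minus the cooler's effectiveness times the gap between dry-bulb and wet-bulb temperature. The sketch below uses an assumed wet-bulb temperature and effectiveness (not figures from the bank) to show how 105-degree air can leave the media near 80 degrees.

```python
# Direct evaporative cooling: T_out ~= T_drybulb - effectiveness * (T_drybulb - T_wetbulb).
# The 76 F wet-bulb temperature and 85% effectiveness are assumptions
# chosen only to illustrate the 105 F -> ~80 F drop described above.
def evap_outlet_temp(t_drybulb_f, t_wetbulb_f, effectiveness=0.85):
    return t_drybulb_f - effectiveness * (t_drybulb_f - t_wetbulb_f)

print(f"{evap_outlet_temp(105.0, 76.0):.0f} F supply air")  # ~80 F
```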

"It's been working fine," says Andrew Stokes, chief scientist of Deutsche Bank's global technology division. To those who question using outside air to cool a data center, he says, "Would they be comfortable using a laptop outside on a humid summer's day? Would they be comfortable using a PC in their home in a hot, dusty basement? Somehow we think of the x86 motherboards and servers inside data centers as particularly sensitive. But in a way, the computer components inside the data center are more hardened than the ones in laptops, because they're designed for enterprise reliability. We have no issues with pollution, even though we're in a city and near a waterfront - we have to think about salts and corrosives from the sea as well."

Air quality concerns fall into two categories, in his view: particulate matter such as pollen, soot, pollution from trucks, dust and dirt; and gases, such as ozone, sulfur dioxide, and nitrogen dioxide. "If these gases remain in the air, at levels safe for humans, they're benign to servers as well," he says. "Many people walk outside in the city, breathing these things with relatively minor long-term consequences."

More harmful is letting moisture condense onto the servers, which can damage them. "If you completely mismanage your data center, for example not controlling your humidity limits, and you have a big squall front that comes through, and your server room is at all colder than outside, then you have the possibility of condensing moisture onto your servers," he says. "As well as the problems of water and electronics, this condensation can convert the gas pollutants into weak acids inside the servers."

The bank has been testing the effect of higher temperatures and reduced air pressure on data center equipment. Stokes' team recently did a calibration exercise in its air-cooled data center with over 100 critical applications running. "We ran that data center from below 60 degrees Fahrenheit to above 90 degrees Fahrenheit in a single day," he says. The team also reduced the static supply air pressure from normal levels to a quarter of normal levels, and ran a reference set of servers from idle to 100% CPU load, monitoring power consumption under all those scenarios.

"The reason we did that is because of the usual fear, uncertainty, and doubt in corporate data center land about what happens if you increase the temperature," Stokes says. "You have people say if you raise the temperature from 75 degrees to 80 degrees, you're going to massively increase the fan activity, the servers will start breaking more frequently, and you're going to have tremendous problems. What we found was that if you look at the CPU curves from relatively cold to really hot inside this data center, all the curves sit on top of each other. There's almost no difference between hot and cold, and high static pressure to low static pressure inside the space." There was less than 5% difference in power consumption between running at 90 degrees Fahrenheit to running at 60 degrees Fahrenheit.

Even old mainframes, tape decks, and the like can still run at up to 80 degrees Fahrenheit and 60-65% humidity, Stokes says. "If data center operators are running their data centers super cold, like 68 degrees, because they think the assets need that kind of air, then I would suggest looking again at the operating parameters of those assets."

"Facility managers instinctively run much too cautious temperature and humidity ranges inside data centers," Stokes concludes. He acknowledges the presence of old equipment that's more vulnerable to extreme climates. "Quite a few data centers have old tape decks, disk arrays, and other equipment that wouldn't like to be fed 90 degrees and 80% humid air on a consistent basis," he says. "But when you start building these more modular, modern cloud data centers, they can handle much higher temperatures and humidity ranges. Which means that in a place like Philadelphia, Chicago, or New York City, you can run on close to 100% free cooling."

Deutsche Bank runs its Eco Data Center up to 85 degrees with a range of 20% to 80% humidity. "Even with the conditions we've had in New York City through the past year, we've had 108-degree scorching hot record days in Central Park, we've had a hurricane come through this area, we've had massive deluges, we've had dry spells, but we've had 90% achievable free cooling," he says. "Using the lessons we are developing from our calibration experiments, we believe we could have achieved close to 97%." The bank has gone from an average power usage effectiveness (PUE) of 2 to 1.2 in this facility.
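
PUE is total facility power divided by the power drawn by the IT equipment itself, so dropping from 2.0 to 1.2 means the cooling-and-power overhead falls from 1 watt to 0.2 watts for every watt of IT load. The sketch below assumes a 1 MW IT load purely for illustration; the bank's actual load is not disclosed in the article.

```python
# PUE = total facility power / IT equipment power. The 1 MW IT load is an
# assumption used only to show the scale of the improvement from 2.0 to 1.2.
IT_LOAD_KW = 1000.0

def facility_power_kw(pue, it_load_kw=IT_LOAD_KW):
    return pue * it_load_kw

before, after = facility_power_kw(2.0), facility_power_kw(1.2)
print(f"overhead cut from {before - IT_LOAD_KW:.0f} kW to {after - IT_LOAD_KW:.0f} kW, "
      f"a {(before - after) / before:.0%} drop in total draw")
```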

For the hapless maintenance engineer who has to work in the 85-degree data center, Stokes admits that, "Yes, it can get uncomfortable on the hot side averaging 100 to 105 degrees, and we can go to about 120 degrees maximum. We take precautions to look after our staff that do any extended work inside there. If you're just going in to replace one fan or power supply, and you're in that space for 10 minutes, it's like stepping into Las Vegas. It's uncomfortable, but you're only in there for 10 minutes and it's a dry heat." For those working in the hot side of the data center for longer periods, the bank has implemented a "bring your own floor tile" solution. Employees can open a cold-air duct and direct the air toward them.

"When we started to recognize the increasing importance of sustainability and energy conservation, we started to think about how we control these systems, stop unnecessary wastage and drive more efficiency across the whole estate," Stokes says. For example, in another facility, the bank installed custom computer room air conditioning manager software to reduce over-cooling.

Deutsche Bank is also using private cloud computing in its data centers, including infrastructure-as-a-service, platform-as-a-service, self-serve portals, elasticity, resource aggregation, and virtualization. "All those things are creating a capability for us to drive business growth and competitiveness in our marketplace out of less physical consumption of assets," Stokes says. "We've reduced our physical servers in the last three years by about 10%, despite the fact that 2009 to 2012 was a period of substantial business due to increased volatility during the financial crisis."

Public cloud computing is another question. "External cloud service providers increasingly have to understand how to address their products towards enterprises in highly regulated environments," Stokes says. "External cloud until now has been primarily focused on SMEs, academics and less critical use-cases. Through groups like the Open Data Center Alliance, the TM Forum and the Cloud Security Alliance, there is an increasing focus from the buy side of this marketplace on critical attributes to assure that services can be run safely and securely on these cloud platforms. The opportunity for the industry to evolve into this true hybrid cloud environment is becoming bigger each year."

 

ING'S HYBRID DATA CENTER

ING is drastically reducing its data center footprint by going to a hybrid and co-hosted cloud environment. "We've come to the realization that building out our own data centers is a thing of the past, it's a waste of time and energy to build out the brick and mortar of the data center," says Steve Van Wyk, CIO of ING. The Amsterdam-based bank recently teamed up with HP and Colt to build a shared data center into which it will move its existing cloud computing environment. Colt is building the facility and HP will provide technology and services. ING is the anchor tenant; others will be invited to join.

Van Wyk hopes the new data center will become a financial services cloud hub for private, hybrid and public cloud services.

To prepare for this shared cloud environment, ING has been making careful technology decisions. "All banks, including ING, have legacy environments where they're a victim of decisions that have been made in the past - things that don't integrate so well together and old core banking platforms," he says. "As part of the evolution toward cloud, you need to become extremely disciplined at defining standards and technology stacks. You pick those stacks across different providers; you don't want to be solely baked to one provider."

Standards like the Banking Industry Architecture Network that ING spearheads are also helpful. "As you think about the long-term future and the services being delivered to the cloud, standards like BIAN start breaking down how these service components integrate with each other across the industry," Van Wyk says. "Then you can start seeing banks like ING buying components out of the cloud from different providers and they fit together because the standards are already set. You eliminate the integration and are able to provide plug and play services out of the cloud. This is utopia. But this is what will be truly cloud enabling in the future for the banking industry."

Cloud computing is an evolution that takes place across people, process and technology, Van Wyk says. "I can't emphasize enough the people and process aspect of this. We too often talk about this as a technology play, but to me the more challenging elements are the people and process." In the beginning, the bank virtualized a lot of elements, but was still taking months to get new environments set up in the data center due to old processes.

"Our people needed to learn new skill sets with regard to how to take advantage of them, both the people deploying the technology and the people consuming the technology," he says.

"In the past, people would physically unbox the physical devices, burn on the operating system, burn on the complementary supporting environment, and connect to the network," Van Wyk says. "Today, most of that can be done in matter of minutes or hours by clicking through some of the tools like Vblock from the VCE (Virtual Computing Environment Company, a partnership of VMware, Cisco and EMC) to deploy those in a virtual way to our development teams. That takes a whole different skillset."

Eventually more IT will be provided in a cloud service, he believes, such that banks will be buying most back end services as a service. "But the idea of putting everything on the public cloud may be a stretch," he says. "There will be certain things you'll always want to keep close."
