The Data Center Diet

The data center is typically where organizations spend the bulk of their technology and resource investments, so quite naturally, operating one efficiently is a top concern for chief information officers and their staffs. The management complexities range from integration, availability and reliability issues to information management best practices, security concerns, risk management and data protection. At the same time, employees are putting more pressure on information technology to deliver new and different services, from device flexibility to new applications. Here are five practices that can make data centers more efficient.

1. Commoditize your infrastructure. To get the most out of your data center, consider commoditizing it based on three simple principles: stabilize, standardize and optimize. This helps IT staffs prevent or quickly resolve issues without significant investments in time and training. By following these principles, enterprises can commoditize their infrastructure while preparing for future growth.

I don't view infrastructure as a commodity. However, organizations can reap significant benefits by running it like one. Essentially, this translates to "common when possible, custom where it counts." This philosophy is critical to reducing costs, because supporting a customized infrastructure is unsustainable when trying to manage rapidly growing amounts of data. Instead, organizations should embrace a more sustainable model, one built around delivering on the needs of the business rather than the underlying technology. In this model, IT is delivered as a service, running efficiently in stable data centers that are standardized and optimized so that capacity can be managed across the infrastructure. I believe this approach will yield better results while also driving down costs.

2. Embrace virtualization. A virtualized infrastructure puts less emphasis on the traditional server (where things run and how they run) and suggests a new model in which "units of compute" are the metric. In essence, the server is now an app. This challenges the traditional boundaries of what runs where and how you write to those platforms. We view the resulting converged data center as the server. At Wells Fargo, we're focused on driving density and computing power within our data centers through thoughtful virtualization efforts. These efforts are helping us vastly reduce our hardware requirements while improving utilization rates and reducing capacity needs networkwide. Those factors alone drive efficiency. As a byproduct of this investment, we've also done three things: reduced space requirements and the costs of power and cooling; provisioned servers faster to speed up response times to the business; and positioned ourselves for future cost savings through technical advances like cloud services.

Virtualizing our data centers will continue to be a journey at Wells Fargo, and we will maximize this effort where it makes sense.

3. Become "cloud-like." While I can see many benefits in deploying cloud computing, I think CIOs should be far more interested building infrastructure environments that are cloud-like. In some industries like financial services, data should not be in the public arena. In my opinion, public clouds are not mature enough for sensitive customer data. But, there are many different things we can learn from the public cloud about service delivery. By being cloud-like, we can apply the principles of delivering IT as a service without the added security and privacy risks of having this data "living" off-site. To me, a cloud-like approach means we can simplify our infrastructure and keep it in the background using middleware solutions to connect it across the enterprise. Then, we can drive more to the front of the operation and innovate there, close to the user, as opposed to within the infrastructure. Thus, "simple" becomes easy to manage and support. It also allows IT organizations to work more efficiently, while building more customer solutions. The benefit of being cloud-like is spending less time ensuring the reliability and security of the infrastructure, and more time innovating for customers.

4. Achieve operational excellence. By stabilizing, standardizing and optimizing the data center, the infrastructure becomes "mainframe-like," free of custom development and unnecessary complexity. With this approach, IT can focus on availability, reliability and maximizing capacity. A mainframe-like approach means data centers are moving to an integrated stack, which naturally means better integration, easier support and more efficient management. These converged data centers also mean that network, storage and server teams will no longer function as separate, siloed groups. While this will initially require a cultural change, it means IT will be geared to deliver for the business. This is a thoughtful, mindful evolution, not a big bang that gets you there in a day.

5. Be green. Evolving into a sustainable, earth-friendly organization is also on the priority list for many CIOs. And since we all know the greenest energy is the energy that isn't used, a green IT strategy often starts and ends with reducing the power consumed by the data center. One simple way to do this is to raise the operating temperature of your footprint. That not only cuts energy consumption, it has the added benefits of reducing costs and increasing efficiency by stabilizing and optimizing the infrastructure you have rather than building new.

Scott Dillon is executive vice president and head of technology infrastructure services at Wells Fargo.
