By Naveed Siraj
Cloud-based services are growing exponentially with the explosion of public and private cloud services, social media, cross-device data syncing and online storage - to name but a few. Companies and service providers are investing heavily in high-performance, efficient data centers to meet ever-increasing demand and stay ahead of the market.
It’s no surprise that running data centers of this magnitude is a costly operation, so a great deal of importance is placed on efficiency to ensure a return on the investment. Trust and reputation are critical in cloud environments, and consumers and businesses alike expect the utmost in service and security.
Many factors affect our ability as an industry to streamline and minimize the operational costs of running effective data centers that consume less energy while delivering higher performance. The largest issue is one of methodology and the progression of technology. Ultimately, we can only work within today’s technical capabilities, which do not always satisfy industry demand. For instance, high-performance computing (HPC) has significant business potential, yet the amount of energy needed to run such systems makes it neither sustainable nor cost effective today. However, many technologies in the pipeline will enhance our future technical capabilities and rapidly solve today’s IT challenges once available.
As an engineering company, Intel is constantly making technological advances at the micro level with the intent of reducing costs and improving efficiencies on a macro scale, which is particularly beneficial to businesses. Intel’s technology today delivers more performance per watt than at any time in the past. Enabling our partners to capitalize on lower operational costs means they become greener and more profitable, with their investment returned faster and ROI realized in a shorter time, alongside increased revenue.
In order to improve data center efficiency and ensure that businesses’ investments in cloud offerings or storage solutions are profitable, we need to consider three key steps:
· Modernizing and refreshing technology;
· Utilizing management systems that measure and account for power efficiently (Node Manager and Data Center Manager [NM/DCM]); and
· Maximizing the operational efficiency in terms of power and thermal profile of each server (Power Thermal Aware Scheduling [PTAS]).
Modernize and Refresh
A holistic and integrated approach to data center infrastructure management can lower operational costs by up to 20 percent.
Power Thermal Aware Scheduling (PTAS) is a new concept for Intel that manages metrics at the workload level, such as CPU consumption, memory consumption, and input/output levels. Traditionally this data would be aggregated separately into a capacity planner along with building management data, if it was done at all. PTAS takes all of this into account and also captures metrics at the server level, such as inlet and outlet temperature. It allows users to run analytics and make decisions, for example, to migrate workloads that are creating “hot spots” to cooler areas of a data center.
Intel has pushed the boundaries by trialing the connection of PTAS to building management systems at data centers in India and Taiwan, so it can interact with air conditioning units to cool specific areas efficiently when needed, rather than wastefully running all units at the same time.
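To illustrate the hot-spot migration idea, here is a minimal sketch of thermally aware placement; it is not Intel’s actual PTAS implementation, and all names and thresholds below are hypothetical. Given per-server inlet temperatures and utilization, a scheduler flags hot servers and picks the coolest server with spare capacity as a migration target.

```python
# Minimal sketch of thermally aware workload placement (hypothetical, not Intel's PTAS code).
# Each server reports an inlet temperature (Celsius) and CPU utilization (0.0-1.0).

HOT_SPOT_THRESHOLD_C = 32.0   # assumed threshold for flagging a hot spot
MAX_UTILIZATION = 0.80        # assumed headroom limit per server

servers = [
    {"name": "rack1-node03", "inlet_c": 34.5, "cpu_util": 0.91},
    {"name": "rack2-node07", "inlet_c": 27.2, "cpu_util": 0.55},
    {"name": "rack3-node01", "inlet_c": 24.8, "cpu_util": 0.40},
]

def find_hot_spots(servers):
    """Return servers whose inlet temperature exceeds the hot-spot threshold."""
    return [s for s in servers if s["inlet_c"] > HOT_SPOT_THRESHOLD_C]

def pick_migration_target(servers):
    """Return the coolest server that still has CPU headroom, or None."""
    candidates = [s for s in servers if s["cpu_util"] < MAX_UTILIZATION]
    return min(candidates, key=lambda s: s["inlet_c"]) if candidates else None

for hot in find_hot_spots(servers):
    target = pick_migration_target(servers)
    if target and target["name"] != hot["name"]:
        print(f"Migrate workload from {hot['name']} ({hot['inlet_c']} C) "
              f"to {target['name']} ({target['inlet_c']} C)")
```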
Management
Intel’s Node Manager and Data Center Manager allow control and monitoring of server power at the rack and data center level, which means there is transparency in where energy is going; and with transparency comes the ability to control and reduce costs.
From the system-level energy reports, you can limit individual server power consumption, limit total rack power draw to increase productivity per rack, and limit aggregated row power draw. This is very useful during an unplanned power event, when the data center is running on generators with a finite amount of on-site fuel and battery power. Using a method to “tell” the servers that there is a building emergency and power is in a critical state means the servers can “throttle” back enough to maintain the customer SLA while extending the on-site fuel supply. This is similar to your notebook PC: when you unplug the power from the wall, the notebook changes state because it knows it is on battery power.
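As a rough illustration of that throttling logic only: the real controls are exposed through Node Manager and Data Center Manager, and the function names and cap values below are hypothetical. A management script might react to a switch to generator power by lowering per-server power caps until utility power returns.

```python
# Hypothetical sketch of emergency power capping; not the actual NM/DCM API.

NORMAL_CAP_W = 450      # assumed per-server power cap under utility power
EMERGENCY_CAP_W = 300   # assumed reduced cap while running on generators

def set_power_cap(server, watts):
    """Placeholder for the management-interface call that applies a power cap."""
    print(f"{server}: power cap set to {watts} W")

def on_power_event(servers, on_generator):
    # When the facility falls back to generator power, throttle every server
    # to the emergency cap so the finite on-site fuel lasts longer, while
    # leaving enough performance to keep meeting the customer SLA.
    cap = EMERGENCY_CAP_W if on_generator else NORMAL_CAP_W
    for server in servers:
        set_power_cap(server, cap)

on_power_event(["rack1-node03", "rack2-node07"], on_generator=True)
```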
Maximizing Operational Efficiency
Cooling is a significant cost in running a data center, so operating at a higher temperature helps to reduce these costs. Data centers ten or more years ago were typically cooled to 18-21 degrees Celsius for a number of reasons, including server warranties that stipulated this temperature; this is no longer the case, with warranties now set closer to 35 degrees Celsius. For every one degree rise in temperature, there is an estimated four percent operational saving, which shows how Intel’s High Temperature Operation solution can dramatically cut costs.
To put this into perspective, if data centers were to increase operational temperatures by five degrees Celsius, there could be a total annual power saving of $2.16 billion globally. Achieving these significant results requires innovative solutions in modernizing, managing and maximizing both the infrastructure and cooling methods of data centers.
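As a back-of-the-envelope check on those figures: the four-percent-per-degree number is the estimate quoted above, and whether the savings add linearly or compound is an assumption made here for illustration.

```python
# Rough arithmetic behind the temperature claim (illustrative only).
saving_per_degree = 0.04   # the article's estimated 4% operational saving per 1 C rise
degrees = 5                # the five-degree increase discussed above

linear = saving_per_degree * degrees                  # simple addition: 20%
compounded = 1 - (1 - saving_per_degree) ** degrees   # each degree saves 4% of what remains: ~18.5%

print(f"5 C rise, linear estimate:     {linear:.1%}")
print(f"5 C rise, compounded estimate: {compounded:.1%}")
```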
There are a number of hardware and software products already on the market to achieve this, with significant development in technologies such as PTAS paving the way to greater efficiency. By cutting operational costs in this way, Asian businesses can see a return on their data center investments much more quickly than the way most still operate allows. Only when industries stop and look at how their data centers are operating, rather than just letting them run the way they always have, will they see the rewards and benefits of a more sustainable and cost-efficient operation.
The writer is Country Manager, Intel Pakistan.