
Coping with rocketing electricity costs

A broad spectrum of best practices should be adopted for enhancing energy efficiency.
By Philip Hampton, CTO at Powermode.
Johannesburg, 18 Sep 2008

Rising electricity costs, combined with the seasonal increase in the use of air conditioners in summer, will have a significant impact on data centre budgets.

Is it possible to reduce power consumption in the data centre without compromising performance or the availability of critical components?

It's a question being asked with increasing frequency by medium to large-sized companies as they prepare for the promised hikes in electricity costs.

The key issue is: how much energy can be saved, and at what cost? Most importantly, which changes need to be implemented immediately, and which can be incorporated into regular technology upgrades - thus minimising capital expenditure?

In order to achieve a meaningful reduction in energy usage, it is necessary to adopt a broad spectrum of best practices for enhancing energy efficiency, covering everything from facility lighting to cooling system design.

Encouragingly, in the US, the Environmental Protection Agency reports that if energy conservation 'best practices' are introduced immediately, data centre energy consumption can be reduced by 50%.

Is 50% a realistic target?

It is, particularly as it's been demonstrated that reductions in energy consumption at the IT equipment level can cascade across all supporting systems, adding significantly to savings.

Energy use

Energy conservation efforts must begin with a clear understanding of data centre energy consumption patterns and an analysis of how energy is used within the facility.

Energy use is categorised as either 'demand side' or 'supply side'.

Demand-side systems are the processors, server power supplies, server components, storage and communication equipment and other IT systems that support the business.

Supply-side systems support the demand side and include the UPS, power distribution, cooling, lighting and building switchgear.

Perhaps surprisingly, demand-side and supply-side energy consumption are roughly equal, with some facilities leaning more towards the demand side - but not by more than a percentage point or two.

Importantly, reductions in demand-side energy use can often influence the supply side. For example, a 1-watt reduction at the server component level can result in an almost 2-watt additional saving in the power supply, power distribution system, UPS system, cooling system and building switchgear - nearly three watts in total.
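To show how this cascade adds up, the short sketch below models it with assumed per-stage loss factors. The factors are illustrative assumptions, not measurements from any particular facility; with these assumptions, a 1-watt saving at the component level grows to nearly three watts across the facility.

```python
# Illustrative sketch of the cascade effect: a 1 W saving at the server
# component level avoids additional losses in each upstream stage.
# The per-stage loss factors below are assumptions for illustration only.

upstream_loss_factors = {
    "power supply (AC/DC conversion)": 0.25,
    "power distribution": 0.04,
    "UPS": 0.08,
    "cooling": 1.00,   # assumes roughly 1 W of cooling per watt of heat removed
    "building switchgear/transformer": 0.02,
}

def cascaded_saving(component_saving_w: float) -> float:
    """Return the facility-level saving for a saving made at the server
    component level, compounding the assumed upstream losses."""
    total = component_saving_w
    for stage, loss in upstream_loss_factors.items():
        extra = total * loss  # energy no longer wasted in this stage
        print(f"{stage}: additional {extra:.2f} W avoided")
        total += extra
    return total

print(f"Total saving: {cascaded_saving(1.0):.2f} W")  # roughly 2.9 W in total
```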

Best practices

One of the first steps to take towards realising energy saving goals is to swap energy-inefficient processors and power supplies for high-efficiency components.

The typical processors in use today draw around 90 watts. The latest low-power, lower-voltage versions draw - on average - 30 watts less. As with processors, many of the server power supplies in use today cannot deliver the levels of efficiency that the latest models can.

It has been estimated that an 'un-optimised' data centre uses power supplies that average around 75% efficiency across a mix of servers ranging from five years old to new.

The latest power supplies deliver efficiencies of over 90%. Use of these power supplies can reduce power consumption within the data centre by around 10% to 12%.
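As a rough illustration of the arithmetic behind that figure, the sketch below compares the mains draw of the same server through a 75%-efficient and a 90%-efficient supply; the 300-watt load is an assumed example value, not a figure from the article. The saving at the server works out to roughly a sixth of its mains draw which, scaled by IT equipment's share of total facility consumption and the cooling cascade described earlier, is consistent with a saving of around a tenth of the overall data centre load.

```python
# Rough illustration: the same DC load drawn through a 75%-efficient
# versus a 90%-efficient power supply. The 300 W load is an arbitrary
# example value.

dc_load_w = 300.0

input_old = dc_load_w / 0.75   # 400 W drawn from the mains
input_new = dc_load_w / 0.90   # ~333 W drawn from the mains

saving_w = input_old - input_new
saving_pct = saving_w / input_old * 100

print(f"Old supply input: {input_old:.0f} W")
print(f"New supply input: {input_new:.0f} W")
print(f"Saving: {saving_w:.0f} W ({saving_pct:.0f}% of the server's mains draw)")
```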

It should be noted that some power supplies perform more efficiently at partial loads than others. This is particularly important in devices with redundant power supplies where power supply usage can average less than 30%.

In line with this, it's important to size power supplies closer to the actual load - rather than theoretical maximum load conditions that may rarely occur.

Making these changes will help create a solid platform from which to launch other energy optimising strategies - such as the introduction of power management software.

This should be seen as the second step in the energy conservation process and is key for data centres that have large differences between peak and average usage rates. Power management can save between 8% and 10% of an un-optimised data centre load.
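A hypothetical back-of-the-envelope estimate of what power management can recover in a facility with quiet overnight periods might look like the following; every figure in it is an assumption chosen purely for illustration.

```python
# Illustrative estimate of power-management savings for a facility with a
# large gap between peak and average utilisation. All figures are assumed
# example values.

servers = 200
active_power_w = 300.0     # per server, under load
managed_idle_w = 180.0     # per server, with sleep states and frequency
                           # scaling applied during quiet periods
quiet_hours_per_day = 10   # hours per day the load allows servers to idle

daily_saving_kwh = (
    servers * (active_power_w - managed_idle_w) * quiet_hours_per_day / 1000
)
print(f"Estimated saving: {daily_saving_kwh:.0f} kWh per day")
```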

Another feature of the strategy should be server virtualisation. This is a technology that can play a vital role in optimising the data centre for efficiency, performance and manageability.

Virtualisation is able to make a single server appear to function as multiple logical servers. It therefore represents a significant step towards reducing the amount of hardware in the data centre - and also within the corporate infrastructure - and consequently, the power usage of these devices.

Implementing virtualisation technologies can provide an incremental 8% to 10% reduction in total power consumption for a facility in which only 25% of its servers are virtualised.
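The sketch below works through a hypothetical consolidation of that kind; the fleet size, consolidation ratio and per-server draw are all assumptions, and it ignores the fact that the remaining hosts will draw somewhat more once they carry the consolidated load.

```python
# Illustrative consolidation estimate: virtualising a quarter of the
# server estate onto fewer physical hosts. All figures are assumptions.

total_servers = 100
virtualised_share = 0.25       # 25% of servers are virtualised
consolidation_ratio = 5        # assumed guests per physical host
avg_server_power_w = 400.0     # assumed average draw per server

candidates = total_servers * virtualised_share    # 25 servers
hosts_after = candidates / consolidation_ratio    # 5 physical hosts remain
servers_removed = candidates - hosts_after        # 20 machines switched off

it_saving_w = servers_removed * avg_server_power_w
it_load_w = total_servers * avg_server_power_w
print(f"IT-level saving: {it_saving_w:.0f} W "
      f"({it_saving_w / it_load_w:.0%} of the original IT load)")
```

Since IT equipment accounts for roughly half of total facility consumption, a cut of this order at the IT level corresponds broadly to the 8% to 10% total reduction quoted above, before allowing for the extra draw of the busier remaining hosts.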

Supply side

Moving to the supply side, the most effective energy saving strategies involve the monitoring and optimising of cooling systems throughout the building - particularly data centre air conditioning.

Best practices include hot-aisle/cold-aisle rack arrangements as well as sealing gaps in floors, using blanking panels in open spaces in racks and avoiding the mixing of hot and cold air.

Economisers should also be used where appropriate, allowing outside air to support data centre cooling during winter.

Computational fluid dynamics (CFD) systems can be used to identify inefficiencies and optimise airflow. This can often result in efficiency improvements of between 5% and 8%.

Other strategies include consolidating data storage from direct-attached to network-attached storage, and reorganising data so that less frequently accessed data resides on slower archival drives that consume less power.

Through these seemingly simple strategies, users can transform a power-hungry data centre into a modern, energy efficient facility.

* Philip Hampton is CTO at Powermode.
