Stulz and Next Generation Data


Keeping It Cool In A High Density World

Cooling, together with power and energy management, is critical to data centre resilience and uptime, as well as to determining a facility’s overall power usage effectiveness (PUE). With cooling typically accounting for 40% or more of a data centre’s total energy bill, the more that can be done to optimise and reduce cooling energy, the better from cost, environmental and legislative perspectives.
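
As a rough illustration of the arithmetic, the sketch below shows how the cooling share feeds into PUE; the load figures are assumptions chosen for illustration rather than measurements from any particular facility.

```python
# Illustrative PUE arithmetic - assumed figures, not measured data.
it_load_kw = 1000.0       # energy delivered to IT equipment
cooling_kw = 700.0        # cooling plant (~39% of the total bill here)
overhead_kw = 100.0       # UPS losses, lighting, ancillaries

total_kw = it_load_kw + cooling_kw + overhead_kw
print(f"Cooling share = {cooling_kw / total_kw:.0%}")   # ~39%
print(f"PUE = {total_kw / it_load_kw:.2f}")             # 1.80

# Halving cooling energy (containment, higher setpoints, free cooling)
# cuts PUE to 1.45 for the same IT load.
print(f"Improved PUE = {(it_load_kw + cooling_kw / 2 + overhead_kw) / it_load_kw:.2f}")
```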

There are various options and alternatives available - some only within the grasp of new build rather than legacy facilities – including the harnessing of climatically cooler locations that favour direct air and evaporative techniques; installing intelligent predictive cooling systems; using water, liquid or nano-cooling technologies; along with prerequisite aisle containment techniques - hot or cold.

A fundamental step towards improving cooling efficiency is to move away from perimeter cooling, whereby CRAC units circulate cold air via a raised-floor plenum. Instead, adopt a hot aisle/cold aisle configuration with racks aligned in rows and server exhausts facing one another to create the hot aisle. Combined with the separation of hot and cold air streams through either hot (HACS) or cold (CACS) aisle containment, this approach enables cooling systems to be set to higher temperatures, saving more energy while still providing safe operating temperatures for IT equipment.

Often used in combination with traditional perimeter cooling, the CACS approach encloses the cold aisle to separate the hot and cold air streams, allowing the rest of the data hall to become a single hot air plenum. The addition of aisle-end curtains or doors and ceilings will further reduce the mixing of air streams.

While CACS is certainly a big improvement in cooling efficiency compared with relying solely on a perimeter approach, it has inherent inefficiencies: the distance the chilled air must travel from the perimeter CRAC units to the servers, and the pressures, and therefore fan energy, required to distribute it.

CACS also imposes limits on rack cooling densities (above around 6kW) because of the difficulty of distributing sufficiently cold air through raised floors. Higher densities require higher airflow, and producing and driving the larger volumes of cold air needed to compensate for the resulting hotspots is costly.
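
The airflow penalty behind this can be sketched from the standard sensible heat relationship for air. In the example below the air properties are textbook approximations, and the rack loads and temperature difference are assumptions for illustration.

```python
# Airflow needed to remove a given rack load at a given supply/return
# temperature difference. Standard properties of air; rack loads and
# delta-T are illustrative assumptions.
AIR_DENSITY = 1.2       # kg/m^3 (approx., at ~20 degC)
AIR_CP = 1005.0         # J/(kg*K)

def airflow_m3_per_hour(rack_load_kw: float, delta_t_k: float) -> float:
    """Volume of air per hour required to carry rack_load_kw at delta_t_k."""
    mass_flow = rack_load_kw * 1000.0 / (AIR_CP * delta_t_k)   # kg/s
    return mass_flow / AIR_DENSITY * 3600.0                    # m^3/h

for load in (4, 6, 20, 60):
    print(f"{load:>2} kW rack, 10 K delta-T: {airflow_m3_per_hour(load, 10):,.0f} m^3/h")

# Fan affinity laws mean fan power rises roughly with the cube of flow,
# so pushing several times the airflow through a raised floor quickly
# becomes the dominant cost - hence in-row cooling for dense racks.
```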

For cooling higher density racks, HACS incorporating doors, ceilings, in-row cooling systems and variable speed, temperature-controlled fans may provide a more efficient solution whilst ensuring server availability. This is because HACS is a self-contained ‘room within a room’ system that removes server exhaust air from the data hall at large and precisely targets cold air at specific racks or rows.

By maintaining higher return temperatures, HACS will also ensure cooling unit capacities are maximised whilst the power consumed (including by fans) is reduced. Apart from saving energy, this will maximise the potential for free cooling days. HACS can also be deployed without impacting the existing cooling architecture - unlike CACS - as it does not affect the temperature of the entire computer room.

 

Post-production images of the dry coolers used at NGD. Each dry cooler is capable of handling up to 1.2MW of heat.

There is a common misconception that running low density racks instead of higher density ones will be less costly when it comes to power and cooling, but the reverse is actually the case. A smaller number of high density racks will yield a lower total cost of ownership than a larger number of low density ones, because they offer far superior compute capabilities while using significantly less data centre resource: switchgear, UPS, power, cooling towers and pumps, chillers, lighting and so on.
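
A back-of-the-envelope comparison makes the point; the total IT load and densities below are purely illustrative assumptions, not NGD figures.

```python
# Back-of-the-envelope rack count comparison for the same total IT load.
# All figures are illustrative assumptions, not NGD data.
total_it_load_kw = 1200.0

for density_kw in (4, 20, 60):
    racks = -(-total_it_load_kw // density_kw)        # ceiling division
    print(f"{density_kw:>2} kW/rack -> {int(racks):>3} racks")

# 1,200 kW of IT load needs 300 racks at 4 kW but only 20 at 60 kW.
# Every extra rack brings its own footprint, PDUs, cabling, containment
# and a share of switchgear, UPS and cooling distribution - which is why
# fewer, denser racks tend to give the lower total cost of ownership.
```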

While next generation servers are capable of operating at higher temperatures than previous generations, the demand for intelligent, energy efficient cooling remains in the ascendancy. Among other factors, this is being driven by increased deployments of high density racks - some already drawing 60kW or more - to support users’ growing Cloud, Big Data and HPC compute and storage needs.

Faced with these challenges, best practice dictates that data centre and facilities professionals will increasingly need to apply real-time Big Data analysis and monitoring techniques to the data centre itself - optimising cooling plant and maintaining appropriate operating temperatures for IT assets without fear of compromising performance and uptime.
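
A minimal sketch of what such a real-time check might look like is shown below; the rack names, thresholds and decision rule are illustrative assumptions rather than any particular vendor’s monitoring API, with the 27°C ceiling reflecting common guidance for server inlet air.

```python
# Minimal sketch of real-time inlet-temperature review.
# Rack names, thresholds and the decision rule are illustrative assumptions.
SAFE_MAX_C = 27.0    # common upper guidance for server inlet air
TARGET_MIN_C = 24.0  # below this, cooling is working harder than needed

def review_cooling(inlet_temps_c: dict[str, float]) -> str:
    hottest = max(inlet_temps_c.values())
    if hottest > SAFE_MAX_C:
        return "lower supply setpoint / increase airflow"   # protect uptime first
    if hottest < TARGET_MIN_C:
        return "raise supply setpoint"                       # reclaim cooling energy
    return "hold"

print(review_cooling({"rack-A1": 23.1, "rack-A2": 24.8, "rack-B1": 22.5}))
```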

Central to this, and to maximising overall data centre energy efficiency and PUE, are integrated energy monitoring and management platforms capable of integrating the building management system (BMS), PDUs and SCADA. Data centres have historically used disparate systems, which are considerably less efficient. An advanced platform will save thousands of pounds through reduced power costs and a minimised environmental impact, while helping to ensure maximum uptime through predictive maintenance.
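
As a hedged sketch of the kind of calculation such a platform runs continuously, the snippet below derives a rolling PUE from metered facility and IT power feeds and flags drift that might warrant investigating the cooling plant before uptime is affected; the sample readings and drift threshold are invented for illustration.

```python
# Sketch of a rolling-PUE check across integrated meter feeds (BMS, PDUs, SCADA).
# Sample readings and the drift threshold are illustrative assumptions.
from statistics import mean

facility_kw = [1480, 1495, 1510, 1620, 1640]   # total facility power, recent samples
it_kw       = [1250, 1252, 1249, 1251, 1250]   # IT power from PDU metering

pue_samples = [f / i for f, i in zip(facility_kw, it_kw)]
baseline = mean(pue_samples[:3])
latest = pue_samples[-1]

if latest > baseline * 1.05:   # more than 5% drift from the recent baseline
    print(f"PUE drift: {baseline:.2f} -> {latest:.2f} - inspect cooling plant")
else:
    print(f"PUE steady at {latest:.2f}")
```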

 

Intelligent choices

Stulz UK has supplied the cooling systems at NGD’s mega data centre in South Wales since the site opened in 2009. The Tier 3 facility offers both private and shared colocation data halls across 750,000 sq ft.

With up to 180 MW power available from a direct SuperGrid connection, the facility has proved increasingly popular with enterprise and service provider organisations operating high density environments such as those found in HPC and Cloud.     

Two years ago, in response to increasing customer requirements to further reduce PUE, NGD evaluated alternatives to the direct expansion (DX) systems originally supplied by Stulz. Among them was an adiabatic (IAC) system which, despite demonstrating very energy efficient PUE figures, was rejected due to the major infrastructure works involved, its large footprint relative to its cooling capacity, its high water usage and treatment requirements, and on grounds of cost.

 

NGD (Next Generation Data), situated between Cardiff and Newport, is Europe’s largest data centre and the UK’s leading secure data centre provider.

 

Stulz subsequently replaced the DX systems with its latest cooling solution, which intelligently determines the optimal mode of operation according to external ambient conditions and data hall requirements. This enables the system to operate in free cooling mode for the majority of the year, providing supplementary mechanical cooling only during periods of elevated external ambient temperatures.
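
A simplified sketch of that mode selection logic is shown below; the thresholds and mode names are assumptions for illustration and not the actual Stulz control parameters or staging.

```python
# Simplified free-cooling mode selection - thresholds and mode names are
# illustrative assumptions, not the actual Stulz control parameters.
def select_mode(ambient_c: float, supply_setpoint_c: float) -> str:
    approach = supply_setpoint_c - ambient_c   # how much colder outside is than needed
    if approach >= 5.0:
        return "free cooling"          # dry coolers alone can meet the load
    if approach > 0.0:
        return "mixed mode"            # free cooling topped up by mechanical cooling
    return "mechanical cooling"        # elevated ambient: compressors carry the load

for ambient in (8, 17, 24):
    print(f"{ambient:>2} degC ambient -> {select_mode(ambient, 20.0)}")
```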

The scalable system design is based on the glycol free cooling GE system and features a unique pipework arrangement: a low loss header within the dry-cooler compound and individual flexible pipes distributing to each CRAC unit. The pipes are double insulated, with an inner and an outer pipe; should the inner pipe leak, fluid transfers into the outer pipe, which has its own leak detection installed. The pipes can be made to any length, minimising joints, and can be installed in a fraction of the time a traditional piping system would take.

The first GE system was installed and commissioned within five weeks, and the annualised PUE figure achieved was 1.18. The system cost almost 60% less than the IAC, and its ROI period was three years compared with six for the IAC.

Thanks to this intelligent cooling solution, NGD has been able to keep pace with customer requirements, delivering solutions from a standard 4kW rack up to 60kW per rack and beyond, with resilience at a minimum of N+20%.

 
