Lessons from a CRAC Dealer

[Image: Datacenter with underfloor air distribution.]

Unless you have actually been inside a large datacenter, it is difficult to appreciate the scale of their design requirements. My first foray into the world of CRACs (Computer Room Air Conditioners) was a project for one of the many datacenters in Arlington, VA. I had never seen anything like it: rows and rows of server cabinets, power distribution units, and CRACs covering the entire floor of an office building. It looked like a movie set. From that point on I was hooked on designing high sensible load systems.

It took me several years and over a hundred projects to fully understand the many different ways you can design a datacenter. The most fundamental lesson in datacenter design is the realization that most datacenter equipment will be replaced several times throughout the facility’s lifecycle. The servers in the datacenter you are designing today could be replaced before the CRAC units you specified are even manufactured (it has happened to me more times than I can count). With each year, Moore’s Law results in higher processing speeds and higher heat loads. It can be extremely challenging to design air conditioning systems for a mission critical facility when you know the sensible load grows every year. Datacenters are living, constantly evolving creatures.
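
The growing sensible load described above can be made concrete with the standard sensible heat equation for air at standard conditions, Q [BTU/hr] ≈ 1.08 × CFM × ΔT [°F]. The sketch below is illustrative only; the 5 kW cabinet load and 20 °F temperature rise are hypothetical numbers, not figures from any particular project:

```python
# Rough airflow estimate for a sensible-only server load, using the
# standard-air sensible heat equation: Q [BTU/hr] = 1.08 * CFM * dT [F].
# The cabinet load and delta-T below are hypothetical illustration values.

def required_cfm(sensible_load_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to absorb a sensible load at a given air temperature rise."""
    btu_per_hr = sensible_load_kw * 3412.14  # 1 kW = 3412.14 BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

# Example: a 5 kW cabinet with a 20 F rise across the servers.
print(f"{required_cfm(5, 20):.0f} CFM")  # -> 790 CFM
```

Double the heat load at the same temperature rise and the required airflow doubles with it, which is why Moore's Law keeps air distribution designers up at night.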

The first step in understanding datacenters is exploring their design criteria. Before we get started down the road of high sensible loads, it is important to remember that datacenters require people to provide oversight and management of the facility, but that occupancy isn't typically permanent. In most cases, the environmental conditions required by the servers drive the HVAC equipment selection. Keep in mind, however, that in smaller datacenters occupancy can influence the required ventilation air quantity.

It is easy to understand that a datacenter built to house your business's file storage servers doesn't need to be as robustly designed as one processing a bank's credit card transactions, but where do we go from there? Just as we have systems to classify tornadoes and hurricanes, the industry saw the potential for confusion and developed a set of standardized conditions to classify the requirements of datacenters. Datacenters range from Tier 1, which are basic server rooms, to Tier 4, which are designed to host mission critical computer systems with fully redundant subsystems and biometric security. The four tiers are defined and copyrighted by the Uptime Institute, a Santa Fe-based think tank. Below are summaries of each tier's requirements.

Tier 1 Requirements:

  • Single non-redundant distribution path serving the IT equipment
  • Non-redundant capacity components
  • Basic site infrastructure guaranteeing 99.671% availability

Tier 2 Requirements:

  • Fulfills all Tier 1 requirements
  • Redundant site infrastructure capacity components guaranteeing 99.741% availability

Tier 3 Requirements:

  • Fulfills all Tier 1 & Tier 2 requirements
  • Multiple independent distribution paths serving the IT equipment
  • All IT equipment must be dual-powered and fully compatible with the topology of a site’s architecture
  • Concurrently maintainable site infrastructure guaranteeing 99.982% availability

Tier 4 Requirements:

  • Fulfills all Tier 1, Tier 2 and Tier 3 requirements
  • All cooling equipment is independently dual-powered, including chillers and Heating, Ventilating and Air Conditioning (HVAC) systems
  • Fault tolerant site infrastructure with electrical power storage and distribution facilities guaranteeing 99.995% availability
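
Those availability percentages are easier to grasp when translated into allowable downtime per year. A quick sketch of that arithmetic, using the tier figures listed above:

```python
# Annual downtime implied by each Uptime Institute tier's availability figure.
HOURS_PER_YEAR = 8760

tiers = {
    "Tier 1": 99.671,
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for tier, availability in tiers.items():
    downtime_hours = HOURS_PER_YEAR * (1 - availability / 100)
    print(f"{tier}: {availability}% availability -> "
          f"{downtime_hours:.1f} hours of downtime per year")
```

Tier 1's 99.671% allows roughly 28.8 hours of downtime a year, while Tier 4's 99.995% allows only about 26 minutes. Every decimal place in that availability figure is paid for in redundant equipment.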

A little daunting, isn't it? Working to a design requirement of 99.995% availability can make even the best engineer lose sleep. Unsurprisingly, the Uptime Institute has concluded from extensive field research that most computer rooms cannot properly handle their current computer equipment heat loads, let alone cool higher-density equipment such as blade servers. After measuring cooling conditions in 19 computer rooms comprising over 200,000 square feet, they found that 10% of the CRACs had failed without triggering an alarm. They also found that most datacenters had, on average, 2.6 times more cooling capacity than they required yet still had significant hotspots. The reason? Only 28% of the available cold air supply was directly cooling computer room equipment; the other 72% was effectively wasted. I cannot stress enough the importance of understanding air distribution in your datacenter – if you are up for it, computational fluid dynamics simulations can be worth their weight in gold.
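
The arithmetic behind those hotspots is worth spelling out. As a simplified back-of-the-envelope view (it assumes the installed-capacity ratio and the useful-air fraction simply multiply, which real rooms only approximate), the Uptime figures quoted above imply:

```python
# Why 2.6x installed capacity can still leave hotspots: only a fraction of
# the cold supply air actually reaches the IT equipment. Figures are the
# Uptime study numbers quoted in the text; the multiplication is a
# simplifying assumption for illustration.
installed_capacity_ratio = 2.6   # installed cooling / required cooling
useful_air_fraction = 0.28       # share of cold supply air cooling equipment

effective_ratio = installed_capacity_ratio * useful_air_fraction
print(f"Effective cooling delivered: {effective_ratio:.2f}x the required load")
```

In other words, a room with 2.6 times the cooling capacity it needs can still be delivering less cooling than the load requires at the equipment inlets, which is exactly why hotspots persist despite all that installed tonnage.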

Referencing the Uptime Institute's tier criteria before a project starts can help manage your client's expectations and costs. Most datacenter designs I have participated in along the Gulf Coast have ranged from Tier 1 to Tier 2, even when the client initially believed they needed a Tier 3 or Tier 4 facility. McNellage & Associates can help you select and design the proper CRAC equipment for the tier level your client actually needs. Watch over the coming months as we expand our series on datacenter design – we will explore the many ways you can cool and deliver conditioned air to these specialized facilities.

McNellage & Associates is passionate about innovative datacenter HVAC equipment. We would love to work with you and your team on your next project! Click here to contact us and learn how to make your next CRAC project accommodate your design requirements AND budget.
