Reminiscing Past and Predicting Future Data Center Trends

By: Kevin O’Brien, President, Mission Critical Construction Services, EEC


After nearly 30 years in the data center industry, it’s interesting to see how the advent of new technologies and current events have impacted the data center’s evolution and shaped the structures we see all over the world today.  Here’s a blast from the past and a look into the future of data center facilities:

The 1980s and 1990s

I began my career in the data center industry in the ‘80s as a Facilities Manager for a large financial services company headquartered in New York.  At that time, it was commonplace for data center facilities to be located within the same building as trading and office spaces.

1988 marked the year that we began construction of our very first remote site facility outside of New York, solely dedicated to data and telecommunications.  Prior to our data center, the site was home to an ITT communications hub in New Jersey that functioned as the headquarters for the “Hot Line” link between Washington, DC and Moscow.  During this time, the majority of functions were analog, and having the remote site gave us a way to increase the redundancy and reliability of electrical and mechanical systems.  Back then there was no Tier certification system, but through this effort we reached what is now considered a Tier II standard for electrical systems and achieved the equivalent of 2N redundancy on the UPS.  In the late ‘80s, loads in data centers topped out at a mere 35 to 50 watts per square foot.

To meet the rising demand for fiber and reliable computing, many companies began choosing remote sites during the ‘90s.  In 1989, coming as no surprise to industry professionals, the 7×24 Exchange began publishing articles that shared common experiences and ways to improve overall reliability throughout mission-critical facilities.  Stemming from this, the Uptime Institute was founded in the early ‘90s, bringing with it the widely used Tier certification system.

The Dotcom Era

The next noteworthy paradigm shift occurred during the dotcom boom.  To meet exploding demand, companies began constructing data centers of more than 100,000 square feet, filled with racks drawing roughly 50-75 watts per square foot at full capacity.  Companies were able to build anywhere in the world thanks to the rapid proliferation of fiber as well as the economic upswing.  With unbridled optimism, companies overbuilt, only to watch the stock market plummet following the events of September 11.  After a few years, however, the predicted demand for servers finally materialized.

Following the dotcom collapse, the Sarbanes-Oxley (SOX) Act was passed in 2002, requiring data centers that supported financial trading to locate their facilities within a designated number of fiber miles of Wall Street.  In addition to its proximity standards, SOX also required the construction of a separate, synchronous data center for redundancy, resulting in construction growth throughout New Jersey.  By this point, sites began to approach the 100 watt per square foot barrier, while many reached Tier III and Tier IV status.  Square footage and costs continued to rise as additional space was needed to support more robust infrastructure.

Demand for Density and Redundancy

Alongside the need for greater density and redundancy, this era saw the development of a building ratio of raised floor square footage to infrastructure square footage.  For example, 100,000 square feet of raised floor area (also known as white space) at 100 watts per square foot in a Tier III configuration would have a ratio of 1-to-1.  If the density of that space increased to 150 watts per square foot, the ratio would increase to 1-to-1.5; in other words, one would need 150,000 square feet of space to support the infrastructure for the same 100,000 square feet of raised floor.  This type of infrastructure was built to support IT loads that, in most cases, never actually materialized.  As density increased, the industry moved to measuring kilowatts (kW) per rack instead of watts per square foot, a more accurate gauge of the load.
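
To put numbers on that ratio, here is a quick back-of-the-envelope sketch in Python.  The linear scaling of the ratio with density is a simplifying assumption for illustration, using the figures from the example above:

```python
# Back-of-the-envelope sketch: raised floor vs. infrastructure space.
# Assumes the era's rule of thumb described above: a Tier III site at
# 100 W/sq ft needs about 1 sq ft of infrastructure per sq ft of
# raised floor, and the ratio scales linearly with density (an
# illustrative assumption, not a design rule).

BASELINE_DENSITY_W_PER_SQFT = 100  # density at which the ratio is 1-to-1

def infrastructure_ratio(density_w_per_sqft: float) -> float:
    """Infrastructure sq ft required per sq ft of raised floor."""
    return density_w_per_sqft / BASELINE_DENSITY_W_PER_SQFT

def total_building_area(raised_floor_sqft: float, density_w_per_sqft: float) -> float:
    """Raised floor (white space) plus its supporting infrastructure space."""
    return raised_floor_sqft * (1 + infrastructure_ratio(density_w_per_sqft))

# The example from the text: 100,000 sq ft of white space.
for density in (100, 150):
    print(f"{density} W/sq ft -> ratio 1-to-{infrastructure_ratio(density):g}, "
          f"total building area {total_building_area(100_000, density):,.0f} sq ft")
# 100 W/sq ft -> ratio 1-to-1, total building area 200,000 sq ft
# 150 W/sq ft -> ratio 1-to-1.5, total building area 250,000 sq ft
```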


More Power/More Efficiency

Once the new measurement model took hold and the industry began rating in kW, or cost per kW, people began to clearly understand how much power was actually being used and at what cost.  This shift triggered an energy rating system known as Power Usage Effectiveness (PUE), a universal measure of how efficiently a facility uses energy: total facility power divided by the power delivered to IT equipment.  As a result, data center managers were now held accountable for energy consumption, causing them to seek free cooling and higher levels of efficiency even as densities increased from 3 to 4 kW per rack to 16 to 25 kW per rack.  In response, some facilities, such as Yahoo!’s, completely eliminated mechanical cooling (chillers) in favor of utilizing outside air to provide “free cooling” through a simple structure called the “chicken coop”.  While this worked well for Yahoo! and other similar data centers, it was not an ideal solution for most enterprise data centers, and “hot aisle, cold aisle” containment became the norm.  Through this practice, operators were able to isolate the load, resulting in higher levels of energy efficiency and the trend of elevating temperatures inside the data hall.  The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) published guidance sanctioning higher server inlet temperatures, but the race to claim the lowest PUE continues.
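
For readers who like to see the arithmetic, here is a minimal sketch of the PUE calculation in Python.  The facility figures are hypothetical, chosen only to show how cutting cooling overhead moves the number:

```python
# Minimal sketch of the PUE metric: total facility power divided by
# the power delivered to IT equipment. A PUE of 1.0 would mean every
# watt goes to IT load. The figures below are hypothetical.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    return total_facility_kw / it_load_kw

# Hypothetical enterprise site: 1,000 kW of IT load plus 600 kW of
# chillers, UPS losses, and lighting.
print(f"Mechanical cooling: PUE = {pue(1_600, 1_000):.2f}")  # 1.60

# The same IT load with outside-air "free cooling" replacing most of
# the mechanical plant might cut that overhead sharply.
print(f"Free cooling:       PUE = {pue(1_100, 1_000):.2f}")  # 1.10
```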

The shift towards energy efficiency and “greener” facilities propelled industry leaders to develop more innovative data center technologies, including high efficiency chillers, adiabatic cooling, fuel cells, solar power and 380V direct current (DC) distribution.  These developments may foreshadow the future elimination of multiple alternating current (AC) to DC conversions and of mechanical compressors, making way for more distributed generation of clean power.  Alternative energy sources are also becoming more cost efficient; one example is the Bloom Box fuel cell from Bloom Energy.

For the past 26 years, I’ve had a front row seat to the continuous evolution of the data center, and with each change I’ve had to adapt my environment to new design and construction standards.  Whether it’s an Internet or cloud data center, a big open box or modularized data halls and pods, I’m excited to see all of the growth and change that the next 10 years will bring.