Key Data Center Stakeholders Offer Their Perspectives On PUE in the Mission-Critical Facility

By Dan Bodenski, Director of Strategic Solutions, Electronic Environments Corporation

Power Usage Effectiveness (PUE) is currently considered one of the most important metrics a data center team can use to assess a facility’s current and potential energy efficiency.  PUE is defined as the ratio of the total energy consumed throughout a data center, including all fuels, to the energy consumed by the IT equipment alone.  This go-to metric was originally developed by the Green Grid Association in 2007 as a way to definitively measure and track data center efficiency.  Since its inception, PUE has expanded beyond a simple end-user tool for operators.  Today, PUE is considered by many a Key Performance Indicator (KPI) of a mission-critical data center facility.

According to the Green Grid, three separate levels exist for the measurement of PUE, each with its own benefits and requirements:

  1. The first level, known as “basic” measurement, measures IT equipment energy at the Uninterruptible Power Supply (UPS) output on a weekly or monthly basis;
  2. The second level, known as “intermediate” measurement, measures energy at the Power Distribution Unit (PDU) outputs;
  3. The third level, the most accurate, measures energy at the input to the IT equipment itself and requires a high level of technology coordination, data collection and human interaction. A practical way for facilities to reach this level of accuracy is to install PUE metering, such as kWh meters, with help from an experienced firm; a sketch of how the metering point affects the calculated PUE follows this list.
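
To make the arithmetic concrete, here is a minimal sketch (in Python) of the PUE calculation at each measurement level. All meter readings are hypothetical: the point is that metering closer to the IT equipment counts distribution losses as overhead rather than as IT load, so the reported PUE rises as measurement accuracy improves.

    # Illustrative PUE calculation at the three Green Grid measurement levels.
    # All figures are hypothetical monthly kWh readings, not from a real facility.
    total_facility_kwh = 180_000   # utility meter: IT + cooling + lighting + losses
    ups_output_kwh = 105_000       # Level 1 ("basic"): metered at UPS output
    pdu_output_kwh = 101_000       # Level 2 ("intermediate"): metered at PDU outputs
    it_input_kwh = 98_000          # Level 3 ("advanced"): metered at IT equipment input

    def pue(total_kwh: float, it_kwh: float) -> float:
        """PUE = total facility energy / IT equipment energy."""
        return total_kwh / it_kwh

    for level, it_kwh in [("Level 1 (UPS)", ups_output_kwh),
                          ("Level 2 (PDU)", pdu_output_kwh),
                          ("Level 3 (IT input)", it_input_kwh)]:
        print(f"{level}: PUE = {pue(total_facility_kwh, it_kwh):.2f}")
    # Level 1: 1.71, Level 2: 1.78, Level 3: 1.84 -- the further downstream the
    # meter, the more distribution loss is counted as overhead instead of IT load.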

Whether their roles focus on design and engineering, operations or C-level management, key stakeholders within the data center leverage PUE as a core determinant for evaluating and analyzing a facility’s effectiveness and potential.  When PUE data is used properly within a mission-critical environment, the results can justify added environmental enhancements and enable cost savings through increased energy efficiency, as well as revenue growth from monetizing excess server capacity.

So, how do the different players in the lifecycle of a data center really view PUE?  Designers, engineers, operators and executives all have their hands in different aspects of the facility, so it stands to reason that each has their own approach to using PUE to fulfill their specific role.  Below, we’ll take an insider’s look at how each of these stakeholders leverages PUE to satisfy customer demands and create a more efficient mission-critical environment.

Designers / Engineers

Design and engineering teams are continually pushed to develop mechanical and electrical designs that will drive energy efficiency while simultaneously ensuring maximum uptime and enabling continued innovation.  This balance can be achieved by understanding and considering the PUE of a facility, which provides a transparent view into its energy consumption.

In some cases, taking advantage of the surrounding environment, along with documented, low-risk strategies such as raising the supply air and/or chilled water temperatures, can mean big savings.  To get the best results, design teams should adhere closely to the Green Grid’s definition of PUE components during initial design and analysis while properly identifying source energy; this ensures that the PUE calculations presented in the initial design will match the ultimate results.

Operators

Today’s data center operations teams are under serious pressure to reduce energy use within existing data centers; however, solutions must fit within the framework of a live, operational facility. Managing real-time planning activities and ensuring maximum availability of critical infrastructure sit at the top of operators’ responsibility lists. Not far behind is PUE, which gives operations teams a KPI to deliver and report to senior management on a regular basis. Through this deep understanding of a facility’s energy usage, operators can justify new and effective ways of reducing power loss and saving energy.

Executives

C-level data center executives take a big-picture approach to data center energy effectiveness, and PUE plays an important role in shaping their overall strategies.  The facility energy spend that PUE tracks typically represents approximately 8 to 15 percent of Total Cost of Ownership (TCO), and the metric requires regular monitoring and analysis because it is a KPI that executives often tout to corporate clients and potential third-party customers.
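
A quick back-of-the-envelope model shows why that line item gets executive attention: for a fixed IT load, every tenth of a point of PUE flows straight to the utility bill. The IT load, tariff and PUE values below are illustrative assumptions, not figures from any particular facility.

    # Annual energy cost as a function of PUE (all inputs are assumptions).
    it_load_kw = 1_000          # average IT load
    hours_per_year = 8_760
    tariff_usd_per_kwh = 0.10   # assumed blended utility rate

    def annual_energy_cost(pue: float) -> float:
        return it_load_kw * pue * hours_per_year * tariff_usd_per_kwh

    for pue in (2.0, 1.5, 1.2):
        print(f"PUE {pue}: ${annual_energy_cost(pue):,.0f} per year")
    # PUE 2.0: $1,752,000; PUE 1.5: $1,314,000; PUE 1.2: $1,051,200 --
    # moving from 2.0 to 1.5 saves roughly $438,000 a year at these assumptions.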

In order to have a successful, energy efficient mission-critical facility, these varying perspectives on PUE must be considered by the entire data center team, not just each stakeholder for their specific professional purpose. Through this combined effort and 360-degree approach, the full mission-critical team can ensure long-term facility success.

 

Data Centers Seeking Energy Efficiencies Have Options

By Ken Rapoport, CEO of Electronic Environments Corp.

Our advice to clients who engage us for help building and retrofitting data center facilities for energy efficiency: consider the foundations upon which your data centers are built and the assets deployed inside them.  Reliability and energy efficiency are the overarching objectives.  This approach helps ensure the data center performs to expectations and meets the requirements of the business.

In scenarios where the client is building a new facility, the energy efficiencies offered by large cloud providers can be an attractive option to consider based on a number of factors.  For one, these providers can locate their facilities in geographic regions where the cost of energy is comparatively lower, for example in the northwest of the United States.  They can also leverage customized servers that are able to operate at higher temperatures and higher efficiencies.  Lastly, large cloud providers can take advantage of advanced scalability and uniformity capabilities.  The net result can mean levels of Power Usage Effectiveness (PUE) of 1.02 or 1.01 — a significant achievement.  However, a sizable number of businesses will not have these options, and therefore rarely achieve PUE levels of less than 2.0.

In order to reduce their PUE levels, EEC advises customers in several ways.  First, we conduct assessments and deploy advanced technologies, for example energy-efficient mechanical systems that take advantage of free cooling.  The good news is that a number of powerful new technologies deliver impressive returns and are available at comparatively low cost.  These include intelligent air distribution and management systems that can achieve energy usage reductions of between 20 and 40 percent within just two years.
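
A simple payback sketch suggests how a reduction in that range can justify itself within roughly two years. The baseline energy, tariff and retrofit cost below are hypothetical round numbers chosen only for illustration.

    # Hypothetical simple-payback estimate for an air-management retrofit.
    baseline_cooling_kwh_per_year = 3_000_000  # assumed annual cooling energy
    tariff_usd_per_kwh = 0.10                  # assumed utility rate
    reduction = 0.30                           # midpoint of the 20-40% range
    retrofit_cost_usd = 150_000                # hypothetical installed cost

    annual_savings = baseline_cooling_kwh_per_year * reduction * tariff_usd_per_kwh
    payback_years = retrofit_cost_usd / annual_savings
    print(f"Savings: ${annual_savings:,.0f}/yr; simple payback: {payback_years:.1f} years")
    # Savings: $90,000/yr; simple payback: 1.7 years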

Another option that can deliver greater energy efficiencies is to retrofit your legacy data center technologies.  For example, if you’re operating a low-density data center, one running at 50 watts per square foot, you can deploy direct water-cooled racks or in-row cooling in targeted zones to accommodate potential future pockets of higher-density servers.

For more information about the relationship between data center strategy and energy efficiency, download our free white paper, or view the EEC Google Hangout.

For more information about EEC, visit www.eecnet.com.

Electronic Environments Corporation Expands Mission Critical Construction Services Division and Welcomes Kevin O’Brien to its Executive Team

New innovations and rising demand for data center colocation facilities are driving the rapid growth of today’s data center construction market.  To address these emerging demands and technologies, businesses are looking to optimize and transform their data center operations.  Recent research from TechNavio predicts the global data center construction market will grow at a Compound Annual Growth Rate (CAGR) of 21.99 percent from 2013 to 2018, while Markets and Markets forecasts the global data center networking market will grow from $12.49 billion in 2013 to $21.85 billion by 2018, a CAGR of 11.8 percent over the five-year period.
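
As a sanity check, the 11.8 percent figure follows directly from the two endpoint values quoted above:

    # Verify the quoted CAGR from the start and end market sizes.
    start, end, years = 12.49, 21.85, 5        # $B in 2013, $B in 2018, period
    cagr = (end / start) ** (1 / years) - 1
    print(f"CAGR: {cagr:.1%}")                 # -> CAGR: 11.8%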

Leading mission critical facility management company Electronic Environments Corporation (EEC) recently announced the expansion of its Mission Critical Construction Services Division.  The division enables EEC customers to overcome data center and wireless infrastructure challenges and reach their Power Usage Effectiveness (PUE) and Service Level Agreement (SLA) objectives leveraging unique, comprehensive and integrated facility services, spanning:

  • Data center construction
  • Consulting
  • Design
  • Comprehensive assessments
  • Maintenance programs
  • Data center efficiency solutions

In addition to the expansion of its Mission Critical Construction Services Division, the organization also appointed a new Division President, Kevin O’Brien.  An industry veteran with over 30 years of engineering and construction experience, Mr. O’Brien brings to EEC in-depth expertise in preconstruction, estimating, procurement, construction and commissioning.  Over the last 15 years, the new Division President has focused solely on mission critical construction, serving a wide array of industries.  During his tenure as Director of Mission Critical at Structure Tone and Gilbane, Inc., O’Brien was entrusted with over 13 million square feet of critical construction projects.  He also worked at Bear Stearns, a global investment bank and securities trading and brokerage firm.

EEC has been providing mission critical facility management and turnkey Mission Critical Lifecycle Services (MCLS) to data center and telecom sites across the U.S. for over 28 years.  For more information about Electronic Environments Corporation and its expanded Mission Critical Construction Services Division, visit www.eecnet.com.


10 Data Center Predictions for 2014

My top ten predictions for the data center industry in 2014, in order from “extremely likely” to “I’m totally guessing here”.

Okay. Let’s dispense with a few sure bets first:

1.    Big Data will continue to be a challenge and an opportunity.

The past few years have seen dramatic growth in the amount of raw data produced and stored by corporations, individuals, governments and scientists. IBM estimates that a staggering 2.5 quintillion bytes of data are created each day around the world. All that data obviously has to be stored somewhere, which points toward continuing demand for most types of data center space and data storage devices.

It’s not unusual for a single US business to have more than 100,000 gigabytes (100 terabytes) of stored data.  Warehousing these vast stores of data is relatively straightforward and becoming easier and cheaper as the cost of memory nears the vanishing point. The challenge lies in putting that data to work. The relational databases and desktop applications that processed and helped visualize data in the past are increasingly unsuitable for the scale of modern data hoards.

Simply analyzing massive volumes of data is a true challenge. But the problem goes deeper. The data is also arriving at increasing velocity and in an increasing variety of formats. Making good strategic use of Big Data requires that the analytics tools be capable of analyzing the vast volume of data in real time and across a variety of data types.

In 2013, investment in Big Data neared $1.4 billion. In 2014, the companies that bring to market analytics tools that successfully capture, analyze and visualize Big Data will see increased investment and demand for their services.

2.    Cloud Computing will continue to grow.

All types of cloud computing solutions will see strong growth in 2014, but growth in cloud-based data storage will be particularly strong.  Market research firm ABI forecasts that cloud-based data storage will triple in volume over the next five years, with 4 billion personal accounts holding a whopping 3,500 petabytes of data by 2018.
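
For a sense of scale, some straightforward unit arithmetic on the figures quoted in these first two predictions:

    # Put the quoted storage figures into comparable units.
    daily_bytes = 2.5e18             # IBM: 2.5 quintillion bytes created per day
    print(f"{daily_bytes / 1e18:.1f} exabytes per day")       # 2.5 exabytes/day

    total_pb = 3_500                 # ABI: 3,500 PB in personal cloud storage by 2018
    accounts = 4e9                   # across 4 billion personal accounts
    avg_mb = total_pb * 1e15 / accounts / 1e6
    print(f"~{avg_mb:.0f} MB per account on average")         # ~875 MB per account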

Further driving growth in cloud computing will be an increasing trend among enterprises, SMBs and startups to decide against owning and maintaining their own stack of silicon. Businesses will come to see the costs associated with owning IT infrastructure as an impediment to the pursuit of their core mission. These firms will continue to rapidly adopt PaaS, SaaS and HaaS cloud solutions.

3.    Colocation and managed hosting will continue to grow.

Perceived security and reliability concerns will cause many companies to remain reluctant to make a full commitment to the cloud. However, these companies will still see the logic in deciding to forgo the expense of building, operating and maintaining the infrastructure needed for their own private data center. These companies will turn to hybrid cloud, colocation and hosting providers for their technology backbone.

Growth for colos in 2014 should be strong in all geographies. For example, a recent study by Research and Markets indicates an expected 16.5% CAGR for European colocation providers through 2016.

4.    Data Center Infrastructure Management (DCIM) Market to Grow

Data center operators will continue to look for strategies that let them squeeze as much efficiency out of their existing infrastructure as possible. Maximizing the use of existing electrical, mechanical and compute infrastructure requires careful planning, thoughtful deployment and a clear view of available resources. A number of DCIM technology platforms have demonstrated a significant ability to provide this visibility and planning capability. As a result, the DCIM market has exploded in the past two years.

In December, Gartner, an information technology research and advisory firm, stated that the market for DCIM products has already grown to more than $1 billion.  Look for DCIM to continue its upward trend in 2014.

A little less obvious…

5.    A single provider for Colo, Cloud and Big Data Analytics

The first three predictions comprise what I see as a winning business model for the IT infrastructure outsourcing firm of the future. Companies will emerge that wrap all of these services together into a tidy package and own their customers’ IT relationships from the physical layer through advanced data analytics.

A market leader will emerge that provides:

  • Colocation and managed services for customers seeking a conservative approach to IT infrastructure outsourcing
  • Help for those customers gradually transitioning to a hybrid cloud solution
  • Help moving customers from hybrid cloud to a fully functional cloud solution
  • In-house Big Data analysis and consulting

Will companies like AWS that provide cloud and big data analytics move down the technology ladder to provide traditional colocation services? Will companies like Equinix that provide colocation and cloud services move up the technology ladder to provide big data analytics? 2014 should tell us.

6.    The Internet of Things (IoT) inches closer

As previously noted, Big Data will continue to be a focus for technology firms in 2014 and beyond. Perhaps the biggest data hoard that this tech will be tasked with taming is the deluge of data poised to arrive when nearly every object in the world is part of the Internet of Things and is actively recording and reporting data.

Gartner estimates that by 2020, 26 billion devices will be on the IoT.

The IoT will inch closer in 2014 as key industry conferences incubate ideas, demonstrate capabilities and establish common protocols for IoT tech.

As noted above, cascades of data lead directly to demand for data center space and boom times for data center designers and infrastructure providers.

7.    Rise of the Robots!

According to the Uptime Institute’s Abnormal Incident Reporting (AIR) database, human error accounts for 70% of unplanned data center downtime. (As Uptime’s Hank Seader points out, at root cause, nearly ALL data center failures can be attributed to human error.) 

There are some strategies that will help reduce these errors. For example:

  • As Schneider Electric points out, knowledgeable, highly trained data center operators using tested, formal operating procedures can go a long way toward reducing unplanned downtime.
  • DCIM tools that provide actionable, accurate and timely data regarding infrastructure conditions can also decrease human error.

But if we really want to start reducing data center downtime due to human error, we need to consider replacing data center humans with data center robots.

We have already started to see a few robots creep into data center electrical rooms.  Remote circuit breaker racking solutions such as those offered by CBS ArcSafe are technically robots.

But more significantly, in 2013 Google quietly purchased eight robotics companies, including the industry leader, Boston Dynamics. Boston Dynamics created the spooky BigDog and Cheetah robots and the ATLAS robots used in the DARPA Robotics Challenge.

Google has been tight-lipped regarding its plans for its newly acquired robot tech. But I’m betting that the data centers on Google’s drawing board will look more like the robot-controlled library at the University of Chicago or the inside of a tape drive silo.

Imagine robots racking and stacking servers in 50’ tall racks.

8.    Data center energy efficiency shifts focus to the left side of the PUE decimal point

Ever since the EPA delivered its report to Congress in 2007, a steady stream of green metrics, technologies and strategies has emerged. Most of this tech focuses on reducing the power consumption of the mechanical, electrical and other ancillary systems that data centers require. Incredible feats have been accomplished and energy use has been greatly reduced. As a result, many new data centers are seeing legitimate PUEs below 1.1.

However, many late model data centers are tapped out when it comes to mechanical and electrical system energy efficiency. It’s time for the IT manufacturers to start making reliable, innovative leaps forward in server efficiency. In 2014, I expect to see companies like Servergy lead a march into a new era of data center efficiency.
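
The arithmetic behind this shift in focus is simple: once PUE approaches 1.1, roughly 91 percent of every kilowatt-hour already goes to the IT load, so server-side gains dominate whatever remains on the facility side. A quick illustration:

    # Why efficiency work moves to the IT load once PUE approaches 1.1.
    pue = 1.1
    it_share = 1 / pue                    # fraction of total energy used by IT (~91%)
    max_facility_saving = 1 - it_share    # eliminate ALL remaining overhead: ~9%
    server_saving = 0.20 * it_share       # 20% more efficient servers: ~18% of total
    print(f"IT share: {it_share:.0%}, best facility-side saving: {max_facility_saving:.0%}, "
          f"saving from 20% better servers: {server_saving:.0%}")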

Finally, some harsh truths for a couple of groups that are not likely to see growth…

9.    Local and regional manufacturer’s rep firms feel the pinch.

The increasing number of small and medium businesses that decide against building their own data centers and server rooms is bad news for the local manufacturer’s representatives. The guys that sell racks, power strips, servers, UPS systems and other infrastructure to this marketplace are in for a bumpy ride as their marketplace rapidly shrinks.

Manufacturer’s reps will need to move upmarket and fight it out for the business of the growing colos and cloud providers if they hope to survive. Unfortunately, the manufacturer rep firms will find that the colo and cloud owners are extremely adept at finding the margin in infrastructure deals and cutting it out.

As a result, most of the colo and cloud accounts are being taken direct by the manufacturers, who are also struggling in an increasingly savvy and competitive marketplace.

10.    Fewer data center facilities personnel needed

Any time an industry adopts a technology that radically increases efficiency and automates tasks that were previously done by labor, jobs disappear.  In fact, the whole point behind developing efficiency and automation is to reduce the amount of labor needed. Reduced labor is another term for fewer jobs.

DCIM, robots and other automation tools are allowing data centers to squeeze efficiency from their infrastructure and will eventually allow them to trim their labor force.

If a significant part of your job consists of walking around a data center with a clipboard and taking readings from data center infrastructure GUIs, somewhere someone is pitching the idea that a DCIM sensor and software package can replace you.
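
That pitch is plausible because the core of a clipboard round is a threshold check. A toy sketch, with made-up sensor names, readings and limits:

    # Toy DCIM-style check that stands in for a manual clipboard walkthrough.
    # Sensor names, readings and limits are all hypothetical.
    readings = {
        "ups_a_load_pct": 62.0,
        "crac_3_supply_temp_f": 78.5,
        "pdu_7_branch_amps": 28.4,
    }
    limits = {  # acceptable (low, high) ranges
        "ups_a_load_pct": (0, 80),
        "crac_3_supply_temp_f": (65, 75),
        "pdu_7_branch_amps": (0, 30),
    }
    for sensor, value in readings.items():
        low, high = limits[sensor]
        status = "OK" if low <= value <= high else "ALERT"
        print(f"{sensor}: {value} [{status}]")
    # crac_3_supply_temp_f reads 78.5 against a 65-75 range -> ALERT is logged
    # and escalated automatically, with no human making the rounds.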

The sheer number of data centers needed to house all of our data should result in a net gain in data center jobs. However, the days when data center personnel could be “facilities only” are rapidly drawing to a close. To be competitive in 2014, workers must understand the infrastructure, the software that monitors and controls it, and the mission critical loads it supports.

That’s what I think. What do you think 2014 holds for us?