Top 6 Ways to Improve Data Center Efficiency

By: Daniel Bodenski, PE, LEED AP, Director of Strategic Solutions at Electronic Environments Co.

The data center has become a staple of modern society, making the technology that we use every day possible.  Today, everyone from small start-up organizations to multi-billion-dollar corporations relies on mission-critical facilities to house vital data, and as the Internet of Things (IoT) and Big Data continue to proliferate, our demand for more data centers will only increase.

With growing energy costs and data center energy consumption nearly 100 times higher than that of a typical commercial building, data center owners and operators are placing a higher focus on improving energy efficiency within their facilities.  Maintaining energy efficiency is critical to running a reliable, high-capacity, and cost-efficient mission-critical facility.  At Electronic Environments Co. (EEC), we are dedicated to enabling our clients to develop the most efficient and profitable data centers possible, allowing for maximum uptime while minimizing capital and operational costs.

When it comes to data center energy efficiency, there are six key ways you can improve your bottom line while still ensuring total reliability.  Below, we will examine these key strategies and help you answer the question, “How can my data center be more energy efficient?”

  1. Assessments

Performing a detailed assessment of your data center’s operational performance will give you clear and concrete insight into the particular ways your data center can be improved, outlining the individual areas in which current energy efficiency practices may fall short.  Reviewing airflow management, performing a detailed PUE analysis, and obtaining real-time data hall temperature measurements are all important steps in developing a fully strategic plan to lower energy costs.  Data center assessment professionals are equipped to provide comprehensive results through in-depth analysis and can recommend design, installation and maintenance improvements, resulting in quick and cost-effective solutions.  An assessment can also be used to prepare for external audits and provide foundational data for developing thorough strategies.

  2. Equipment Upgrades

As society evolves, so too do our technologies, putting increased demand on data center capabilities.  Equipment upgrades are necessary to maintain a robust and reliable facility.  Moreover, new technologies that reduce overall energy consumption are continually being developed, such as ECO-mode Uninterruptible Power Supply (UPS) systems, 380V DC power systems, lighting system retrofits, efficient chillers, and more.  By knowing what new technologies exist and understanding the return on investment of these upgrades, you can use them to your advantage throughout your data center’s lifecycle.
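
As a rough illustration of how that return-on-investment thinking might look, the sketch below estimates a simple payback period from an assumed upgrade cost and annual energy savings.  The figures and function name are hypothetical examples for illustration only, not EEC data or vendor numbers.

```python
# Hypothetical payback estimate for an efficiency upgrade (e.g., an ECO-mode UPS).
# All inputs are illustrative assumptions, not vendor or EEC figures.

def simple_payback_years(upgrade_cost: float,
                         kwh_saved_per_year: float,
                         cost_per_kwh: float) -> float:
    """Years to recover the upgrade cost from annual energy savings."""
    annual_savings = kwh_saved_per_year * cost_per_kwh
    return upgrade_cost / annual_savings

if __name__ == "__main__":
    # Example: a $60,000 upgrade that avoids 150,000 kWh/yr at $0.10/kWh.
    years = simple_payback_years(60_000, 150_000, 0.10)
    print(f"Estimated simple payback: {years:.1f} years")  # -> 4.0 years
```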

  3. Maintenance

If your current equipment is unreliable or beyond its normal lifespan, it could be adding to your operating costs and could pose a serious threat to reliability.  Downtime is the number one critical issue, as it will hurt not only your bottom line, but also your reputation as a reliable organization.  The Ponemon Institute reports that data center downtime costs an average of $7,900 per minute.  Can you afford that?  As part of a comprehensive maintenance routine, trained specialists should be engaged to check generator heaters and batteries, perform load bank testing, sample generator coolant, fuel and oil, and regularly exercise overcurrent protective devices.  These activities, coupled with implementation of an on-demand Asset Management system, will increase operational efficiency and reduce overall critical system downtime.

  4. Dynamic Cooling Management

Every data center is unique, so its cooling solutions should be as well.  Cooling plays a critical role in the energy efficiency of a data center, and finding the correct cooling model for your individual facility is critically important.  With a dynamic cooling model that’s easy to deploy, you can see immediate energy savings, more efficient network transformation, and increased network reliability.  Instead of zone-level control, fans are individually optimized based on real-time readings, utilizing rack sensors and control modules to collect temperature requirements and Computer Room Air Conditioning (CRAC) airflow and power metrics, resulting in a fully optimized, intelligent cooling system.
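
To make the control concept concrete, here is a minimal sketch of per-unit fan adjustment driven by rack inlet temperatures.  The setpoint, gain, speed limits and data structures are illustrative assumptions, not a description of any particular CRAC controller or EEC product.

```python
# Minimal sketch of dynamic cooling control: each CRAC fan speed is adjusted
# from the hottest rack inlet temperature it serves. Setpoints, gain, and the
# data structures are illustrative assumptions only.

SETPOINT_C = 24.0      # target rack inlet temperature (deg C)
GAIN = 5.0             # % fan speed change per degree C of error
MIN_SPEED, MAX_SPEED = 30.0, 100.0  # allowable fan speed range (%)

def next_fan_speed(current_speed: float, inlet_temps_c: list[float]) -> float:
    """Proportional adjustment of one CRAC fan based on the worst inlet temperature."""
    error = max(inlet_temps_c) - SETPOINT_C
    proposed = current_speed + GAIN * error
    return max(MIN_SPEED, min(MAX_SPEED, proposed))

# Example: racks served by this unit read 25.2, 23.8 and 24.6 deg C.
print(next_fan_speed(60.0, [25.2, 23.8, 24.6]))  # -> 66.0 (% speed)
```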

  5. Airflow Management

Poor airflow management leads to a number of undesirable results, including the recirculation of supply air, which causes hotspots and reduces the overall effectiveness of the data center’s cooling plant.  By implementing simple airflow management techniques, such as adding floor grommets, implementing partial or full containment, and adding blanking panels, data center operators can see reduced plenum losses and immediate energy savings.  This is a simple, low-cost method to reap instant financial benefits and improve Power Usage Effectiveness (PUE).

  6. Baseline Energy Reduction

Sustainable energy sources such as solar, fuel cell and wind power are becoming more and more commonplace as data centers look to reduce overall energy use, shrink their carbon footprint and become more energy independent.  Not only can sustainable energy sources reduce energy usage; self-contained power plants can also offer data center operators the option to develop a micro-grid, which decreases reliance on an aging electrical infrastructure and provides a strategy for modular data center solutions.

To learn more about these six strategies for enhanced energy efficiency, check out our eBook, “6 Ways to Improve Data Center Energy Efficiency”.  If you would like more information about any of these solutions or feel that you could benefit from customized professional assistance, please visit www.eecnet.com or email us at info@eecnet.com.

About the Author:

Daniel Bodenski, PE, LEED AP, is Director of Strategic Solutions at Electronic Environments Co. Mr. Bodenski has over 20 years of experience in mechanical systems design and project management for mission critical facilities. He has managed several large design, due diligence, site assessment and commissioning projects for telecommunications, healthcare, financial and retail data center clients. At EEC, he proactively increases facility reliability through implementation of new technology for mission critical facilities.

Key Data Center Stakeholders Offer Their Perspectives On PUE in the Mission-Critical Facility

By Dan Bodenski, Director of Strategic Solutions, Electronic Environments Corporation

Power Usage Effectiveness (PUE) is currently considered one of the most important metrics a data center team can utilize to assess a data center’s current and potential energy efficiency.  PUE is the term we use to define the ratio of total energy consumption throughout a data center, including all fuels, divided by the total energy consumption of IT equipment.  This go-to metric was originally developed by The Green Grid in 2007, created as a way to definitively measure and track data center efficiency.  Since its inception, the PUE metric has expanded its usefulness beyond a simple end-user tool for operators.  Today, PUE is considered by many a Key Performance Indicator (KPI) of a mission-critical data center facility.
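
Expressed as a calculation, PUE is simply total facility energy divided by IT equipment energy.  The short sketch below shows that ratio; the meter readings are hypothetical numbers used only for illustration.

```python
# PUE = total facility energy / IT equipment energy, as defined by The Green Grid.
# The kWh readings below are hypothetical examples, not measured data.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: 1.0 is the theoretical minimum; closer to 1.0 is better."""
    return total_facility_kwh / it_equipment_kwh

# Example: a month in which the whole facility drew 1,500,000 kWh
# while the IT load (measured at the UPS output) drew 900,000 kWh.
print(round(pue(1_500_000, 900_000), 2))  # -> 1.67
```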

According to Green Grid, three separate levels exist for the measurement of PUE, each with its own benefits and requirements:

  1. The first level, known as “basic” measurement, measures IT equipment energy at Uninterruptible Power Supply (UPS) output on a weekly or monthly basis;
  2. The second level, known as “intermediate” measurement, allows energy to be measured at the Power Distribution Unit (PDU) outputs;
  3. The third level, the most accurate measurement, requires a high level of technology coordination, data collection and human interaction. A great way for facilities to reach this level of accuracy is to install a PUE measuring device such as a kWh meter with help from an experienced firm.

Whether their roles focus on design and engineering, operations or C-level management, key stakeholders within the data center leverage PUE as a core determinant for evaluating and analyzing a facility’s effectiveness and potential.  When PUE data is used properly within a mission-critical environment, the results can justify added environmental enhancements and enable cost savings through increased energy efficiency, as well as revenue growth from monetizing excess server capacity.

So, how do the different players in the lifecycle of a data center really view PUE?  Designers, engineers, operators and executives all have their hands in different aspects of the facility, so it would stand to reason that each has their own approach to using PUE in order to fulfill their specific role.  Below, we’ll take an insider look into how each of these stakeholders leverages PUE to satisfy customer demands and create a more efficient, mission-critical environment.

Designers / Engineers

Design and engineering teams are continually pushed to develop mechanical and electrical designs that will drive energy efficiency while simultaneously ensuring maximum uptime and enabling continued innovation.  This balance can be achieved by understanding and considering the PUE of a facility, which provides a transparent view into its energy consumption.

In some cases, taking advantage of the surrounding environment as well as documented, low risk strategies such as increasing the supply air temperature and/or chilled water temperatures can mean big savings.  To get the best results, design teams should adhere closely to Green Grid’s PUE definition of components during initial design and analysis while properly identifying source energy; this will ensure that PUE calculations presented in the initial design will match the ultimate results.

Operators

Today’s data center operations teams are under serious pressure to reduce energy use within existing data centers; however, these solutions must fit within the framework of a live, operational facility.  Managing real-time planning activities and ensuring maximum availability of critical infrastructure sit at the top of operators’ lists of responsibilities.  Not far behind is PUE, which provides operations teams with a KPI to deliver and report to senior management on a regular basis.  Through this deep understanding of a facility’s energy usage, operators can justify new and effective ways of reducing power loss and saving energy.

Executives

C-level data center executives take a big-picture approach to data center energy effectiveness, and PUE plays an important role in influencing their overall strategies.  PUE represents approximately 8 to 15 percent of the Total Cost of Ownership (TCO), and requires regular monitoring and analysis because it is a KPI that executives often tout to corporate clients and potential third-party customers.

In order to have a successful, energy efficient mission-critical facility, these varying perspectives on PUE must be considered by the entire data center team, not just each stakeholder for their specific professional purpose. Through this combined effort and 360-degree approach, the full mission-critical team can ensure long-term facility success.


Enhancing Video Distribution through the New Edge of the Internet

By: Laurie Samper, Technical Writer, iMiller Public Relations

A recent TechZone360 article, “Network Shifts to Support Video Distribution”, examines trends within network connectivity and how the Internet is accessed through the lens of industry-leading data center company, EdgeConneX®.  In this blog series, we will further explore the details involved in network evolution, surveying topics such as global network connectivity, non-linear viewership, and mobile data traffic.  In order to begin our journey and gain a deeper understanding of current trends in the Internet community, it’s important to first understand the role of Edge Data Centers® within this evolving landscape.

For years, traditional carriers have focused their resources on major markets and big cities across the nation, overlooking both the need for and the opportunities that arise from bringing content closer to users in underserved, or Tier 2, markets.  One of the most pressing drivers of upgrading networks and establishing Tier 2 connections is video distribution.  EdgeConneX can attest to video’s exponential market growth.  When the company established its first Edge Data Center in June 2013, eight peering points were all that existed for video across the country.  Today, EdgeConneX has opened 22 additional facilities throughout the U.S., increasing the number of peering points to 31 in just two short years.

So what is the “edge” exactly?  The edge is an extension of network reach into local, underserved markets, bringing the content and the eyeballs together in the same location.  By establishing new peering points within local markets, content is brought closer to end-users, enabling the cost-effective, high-quality and efficient delivery of cable network DVRs, 4K Ultra HD streaming, gaming, video integration into social networks and more, alongside enterprise video needs.  By creating a new Edge of the Internet, EdgeConneX’s Edge Data Centers allow customers to multiply the amount of offloaded traffic in a matter of months.

Keeping up with growing network demands is what EdgeConneX does best as it continues to extend its reach into more locations, including plans to expand internationally.  Content is king and EdgeConneX is enabling customers to take advantage of evolving technology and trends such as video distribution through enhanced delivery of digital content to any device, anywhere, anytime.

To learn more, visit www.edgeconnex.com and stay tuned for the next blogs in our series.

The Next Generation of Paralleled Generators

Designing a data center takes a bit of ingenuity and finesse to create a simple, streamlined facility operating at peak efficiency.  Having the proper technology can do just that, providing owners and operators with a path to reliable and affordable operation.  One of the latest developments that is helping many designers create an optimal mission-critical environment is known as modular integration, a new approach to generator systems that bypasses the complexity of traditional paralleled generators.

As we continually strive for innovation and efficiency, many common data center practices have become things of the past, making way for more streamlined, cost-effective approaches.  In the past, traditional paralleling was designers’ only option when creating a data center, forcing them to accept complex systems, high costs and large physical footprints as the norm.  Today, these issues are virtually nonexistent as we move into the next phase of data center power innovation – digitally paralleled generators.

Integrated and traditional paralleling systems are very distinct from one another, and through a deeper understanding of their unique qualities, it becomes easier to discover why integration could be the best choice for your facility.  Four major requirements are considered when analyzing the functions of paralleled generators: synchronization, load sharing, protection and point of synchronization.

Synchronization

A necessary element in all paralleling systems, synchronization within traditional generators relies on third-party components to consistently regulate all controls.  Within onboard integrated systems, however, these controls are incorporated digitally inside the generator itself, eliminating the need for third-party involvement and added cost.

Load Sharing

It is also important to note that the function of load sharing should be equalized between generators to ensure no single unit becomes the “motor”, pulling load from the other.  Traditionally, this is controlled via cabling; however, in the new integrated system, load sharing is regulated digitally to allow for more flexibility in facility design.
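
As a conceptual illustration of the load-sharing goal, the sketch below splits a bus load across paralleled generators in proportion to their ratings, so equally rated units carry equal load and no unit is pushed toward reverse power.  The function and figures are hypothetical; a real integrated controller is considerably more sophisticated.

```python
# Conceptual sketch of proportional load sharing across paralleled generators:
# each unit carries a share of the bus load in proportion to its rating, so no
# single generator is driven into reverse power ("motoring"). Illustrative only.

def share_load(total_load_kw: float, ratings_kw: list[float]) -> list[float]:
    """Return the kW carried by each generator, proportional to its rating."""
    total_capacity = sum(ratings_kw)
    return [total_load_kw * rating / total_capacity for rating in ratings_kw]

# Example: 3,000 kW of load across 2,000 kW, 2,000 kW and 1,000 kW generators.
print(share_load(3000, [2000, 2000, 1000]))  # -> [1200.0, 1200.0, 600.0]
```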

Protection

As critical and expensive equipment, generators must be protected from all potential issues and threats.  When it comes to reverse power, voltage and overcurrent protection, traditional setups relied on a third party for protection.  By contrast, in an onboard, integrated parallel system, each of these protective functions lives within the generator set, creating more space, control and flexibility throughout the data center.

Point of Synchronization

When generators achieve synchronization, it is necessary to employ a connection to the emergency bus, or point of synchronization.  Traditionally, this has been done utilizing motorized breakers within the gear; however, integrated paralleled generators do so using switches or motorized breakers located directly onboard the generator set.

This innovative technology, when used properly, can deliver staggering results by reducing overall complexity, shortening installation lead times, conserving precious floor space, lowering costs and making it easier than ever to be ready for future expansion.  This reliable and cost-effective solution is more achievable than you think and can make a world of difference.

If you would like to learn more about modular generator and integrated paralleling systems, view this comprehensive article by Electronic Environments Corporation (EEC)’s Director, Chris Avery, found here.

Data Centers Seeking Energy Efficiencies Have Options

By Ken Rapoport, CEO of Electronic Environments Corp.

Our advice to clients who engage us for assistance in building and retrofitting data center facilities for energy efficiency is to consider the foundations upon which their data centers are built and the assets deployed inside them.  Reliability and energy efficiency are the overarching objectives.  This approach helps ensure that the data center will perform to expectations and meet the requirements of the business.

In scenarios where the client is building a new facility, the energy efficiencies offered by large cloud providers can be an attractive option to consider based on a number of factors.  For one, these providers can locate their facilities in geographic regions where the cost of energy is comparatively lower, for example in the northwest of the United States.  They can also leverage customized servers that are able to operate at higher temperatures and higher efficiencies.  Lastly, large cloud providers can take advantage of advanced scalability and uniformity capabilities.  The net result can mean levels of Power Usage Effectiveness (PUE) of 1.02 or 1.01 — a significant achievement.  However, a sizable number of businesses will not have these options, and therefore rarely achieve PUE levels of less than 2.0.

In order to reduce their PUE levels, EEC advises customers in several ways.  First, we conduct assessments and deploy advanced technologies — for example, energy-efficient mechanical systems that take advantage of free cooling.  The good news is that a number of powerful new technologies will deliver impressive returns and are available at comparatively low cost.  These include intelligent air distribution and management systems that can achieve energy usage reductions of between 20 and 40 percent in just two short years.

Another option that can deliver greater energy efficiencies is to retrofit your legacy data center technologies.  For example, if you’re operating a low-density data center, one running at 50 watts per square foot, you can deploy direct water-cooled racks or in-row cooling in zones in order to accommodate potential future zones of higher-density servers.

For more information about the relationship between data center strategy and energy efficiency, download our free white paper, or view the EEC Google Hangout.

For more information about EEC, visit www.eecnet.com.

Introducing the Hybrid Data Center of the Millennia

The key to survival in a competitive business environment for the millennia is to employ cloud strategies that work.  For many of today’s companies, that means swapping out their monolithic cloud strategies for a hybrid cloud environment, which offers end users unsurpassed options that include the cost savings, scalability and responsiveness of public clouds coupled with the security, high performance and reliable infrastructure of a private cloud solution.  Utilizing hybrid models, business environments can distribute their workloads more effectively as well as bridge their legacy systems and state-of-the-art architecture in order to realize considerable CAPEX and OPEX savings.

As hybrid cloud continues to address the ever-changing environment of enterprise IT architecture, it will also have a profound influence on the data center as we know it.

The hybrid data center offers businesses a valuable location for their hybrid IT solutions by delivering connectivity to the most important cloud service providers, Internet Service Providers (ISPs), Internet Exchanges (IXs), network service providers, company-owned data centers and office locations, and more. Digital Realty, a leading global provider of data center and colocation solutions, offers customers these key data center facilities complete with a unique global portfolio of cloud-connected solutions that enable them to build, deploy and execute a successful hybrid cloud strategy.

Digital Realty’s GlobalConnect suite is designed to accelerate business growth while simplifying and streamlining hybrid cloud deployment by offering richer direct-connect options, a global data center footprint and enhanced support for hybrid data center environments.  The company’s connectivity solutions, Digital MetroConnect, Digital CloudConnect, Digital IPConnect and Digital PrivateLine, are available in over 130 data centers, delivering secure access to over 50 cloud service providers and 1,000 network service providers.  In addition, Digital Realty customers benefit from direct fiber connections to major cloud providers such as Amazon Web Services™, Microsoft Azure and IBM SoftLayer, as well as dedicated cross-connects to VMware’s vCloud Air public cloud options for infrastructure, disaster recovery and applications.

Digital Realty will be launching its new suite of global connectivity solutions for hybrid cloud and data center environments at International Telecoms Week (ITW) 2015, taking place May 10-13 in Chicago, IL at the Hyatt Regency and Swissôtel Chicago.  ITW is the annual meeting point for the global wholesale telecommunications community and is expected to bring together over 6,000 delegates from 1,870+ companies representing more than 140 countries at its 2015 conference.

Discover how Digital Realty’s global connectivity solutions deliver the Right Workload, to the Right Place at the Right Value™ by scheduling a meeting with a Digital Realty representative at ITW 2015 or stopping by the Digital Realty meeting area at the Hyatt Regency BIG Bar during the conference.

Digital Realty is also hosting The Cloud Ecosystem LIVE! Executive roundtable during ITW on Tuesday, May 12 from 4:00 PM – 5:15 PM in Alpine 2 in the Swissôtel Chicago.  Moderated by Digital Realty’s General Manager of Colocation & Connectivity John Sarkis, this exclusive executive roundtable will bring together thought leaders from across the cloud ecosystem to provide key insights into the delivery and adoption of cloud computing across various hybrid environments.  Participants will also explore how cloud adoption is changing the face of the business cycle, including driving business decisions and new revenue opportunities.  To join Digital Realty for cocktails, appetizers and great discussion at The Cloud Ecosystem LIVE! roundtable, RSVP here.

The Cloud Ecosystem is LIVE! at ITW’15

Forecast highlights from Cisco’s workload predictions indicate that by 2018, cloud data centers will process more than 78 percent of workloads.  The latest cloud computing research gathered by IDG indicates that 69 percent of organizations are utilizing the cloud to run their infrastructure or applications.  Gartner also included cloud computing among its 2015 top 10 strategic technology trends.

Cloud has caused the industry to shift, creating a wave of change that directly impacts corporate IT departments and every supporting industry player globally.  As IT leaders continue to adopt the cloud across their organizations, however, they must first ask themselves key questions: What kind of impact does cloud have on the business cycle?  And which major cloud projects is the industry focusing on this year?

Experts from the cloud ecosystem, including Digital Realty, VMware, Level 3 Communications, EdgeConneX, euNetworks and Telstra, will come together at The Cloud Ecosystem LIVE! roundtable during International Telecoms Week (ITW) 2015 in Chicago, IL to answer these questions and more, offering in-depth insights into the delivery and implementation of cloud computing across hybrid environments.  The discussion will be led by John Sarkis, GM, Colocation and Connectivity, Digital Realty, and will take place Tuesday, May 12 in the Swissôtel, Alpine 2 at 4:00 PM.  Attendees will also enjoy cocktails and light appetizers before, during and after the panel discussion.

Digital Realty is a premier provider of premium data centers and a Prime Sponsor of ITW 2015, with over 130 globally connected data centers across four continents and secure access to over 50 cloud service providers as well as 1,000 network service providers.  Designed to connect with customer cloud deployments and owned data centers, Digital Realty’s global connectivity solutions deliver the Right Workload, to the Right Place and at the Right Value™.  Digital Realty’s data centers in major markets are connected via Digital MetroConnect™ to enable the optimal mix of latency, location and value.  The company also features several OPEN-IX®-certified US locations and non-profit peering exchanges across the US and Europe.

International Telecoms Week (ITW) is the annual meeting point for the global wholesale telecommunications community, including Tier 1, Tier 2 and Tier 3 carriers, mobile / wireless operators, ISPs, VoIP providers and technology partners from the voice, data, satellite, sub-sea and fixed-line markets.  Over 6,000 delegates are expected to attend this year’s conference, taking place May 10-13, 2015 at the Hyatt Regency and Swissôtel in Chicago, IL.

To register for The Cloud Ecosystem LIVE! roundtable, please click here.

To request a customer meeting with the Digital Realty team during ITW 2015, click here.

Remediating Environmental and Energy Data Center Concerns with CRAC / CRAH Retrofit

By: Kevin O’Brien, President – Mission Critical Construction Services Division, EEC

We must address the environmental footprint of today’s data center facilities, whose immense energy consumption contributes to climate change.  Extensive industry research indicates that data centers presently consume approximately 3% of the world’s electricity, while emitting nearly 200 million metric tons of CO2.  As data centers continue their proliferation to support the growth of the Internet of Things (IoT), Big Data, social media and cloud computing, their energy consumption and CO2 emissions will only increase.  One cost-effective and reasonable method of decreasing the negative environmental impact and improving productivity within the data center is optimizing environmental systems such as Computer Room Air Conditioning (CRAC) and Computer Room Air Handler (CRAH) units.

For the past 13 years, I have been personally involved with the implementation of more than 3,000 CRAC / CRAH units, both new and old.  Many of these older units can only cycle on and off, so they operate at constant volume all day, every day and consume enormous amounts of energy.  In this case, newer models may need to be purchased, as they possess the capability to modulate fan speeds in order to save energy thanks to built-in Variable Frequency Drives (VFDs).

However, purchasing new CRAC / CRAH units may not be in the budget for many data center operators.  If this is the case, operators can still cost-effectively optimize the performance of existing systems by retrofitting CRAC / CRAH units with VFDs.  This can substantially decrease energy consumption and cost, as simply lowering a fan’s speed by 20% can cut its power requirements roughly in half.  In my experience, I’ve learned that 2008 CRAC / CRAH units are the most cost-effective to upgrade with Emerson VFDs.  Couple this with the 20% to 50% energy savings potential of Direct Expansion (DX) and chilled water units, and you can save even more.  While savings will vary at each facility given the specific IT load in the data room and equipment configuration, the ROI potential is worth the initial investment.  Another logical step to take during this time is to install an airflow monitoring system, which enables maximum energy savings thanks to reduced energy consumption and improved Power Usage Effectiveness (PUE).
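
The “half the power from a 20% speed reduction” figure follows from the fan affinity laws, under which fan power varies roughly with the cube of fan speed.  A quick sketch of that arithmetic is below; the 10 kW baseline is a made-up example.

```python
# Fan affinity law: fan power scales roughly with the cube of fan speed,
# so a 20% speed reduction cuts fan power to about half.
# The 10 kW baseline figure is a hypothetical example.

def fan_power_at_speed(baseline_kw: float, speed_fraction: float) -> float:
    """Approximate fan power at a reduced speed, relative to full-speed power."""
    return baseline_kw * speed_fraction ** 3

print(fan_power_at_speed(10.0, 0.8))  # -> 5.12 kW, roughly half of 10 kW
```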

Electronic Environments Corp. (EEC) is here to help you with your CRAC / CRAH unit retrofit and airflow monitoring system projects, as well as a wide array of preventive maintenance services.  By monitoring the inlet and outlet rack temperatures alongside the return air entering the CRAC unit, EEC can also help you match your underfloor airflow with IT equipment needs.

If you recognize the importance of reducing energy consumption, the search is over.  At EEC, we provide solutions that are economically viable and environmentally effective for your data center, utilizing the appropriate technology and services to ensure a ‘greener’ and more cost-effective future.

To learn more about EEC, visit www.eecnet.com.

Please welcome Pacer Group to Data Center Discovery!

Pacer has proven capabilities serving a wide range of manufacturers who require UL/CSA-approved wire, such as makers of batteries, forklifts and golf carts, industrial equipment, alternative power systems, appliances, and data storage equipment. We also have a wide range of products that serve automotive, performance racing, truck, RV, and off-road vehicles.

Choose Pacer for quick, accurate, and cost-competitive wire, cable, and electrical parts.

For more information please visit the Pacer Group’s FREE corporate profile at Data Center Discovery

For more information about how your company can get listed in the Data Center Discovery global directory of data center solution providers please email becca@datacenterdiscovery.com

380VDC Power Has Evolved, Providing Data Centers New Ways to Minimize Energy Loss and Improve Reliability

By Jim Stark, P.E., Principal of Engineering, Electronic Environments Corporation


Modern advancements in the Direct Current (DC) power distribution model are quickly making it one of the most viable options for powering a data center, poised to increase energy efficiency and reliability like never before.

Traditionally, data center power distribution models follow a consistent formula, including multiple voltage power conversions between the electric utility and the server.  Distribution transformers, Uninterruptible Power Supply (UPS) systems, and Power Distribution Units (PDUs) all introduce AC (alternating current) to DC conversions and voltage transformations in the power chain, oftentimes resulting in wasted energy.  This typical power distribution model can include:

  1. Conversion from 480VAC to 480VDC within the UPS system
  2. Conversion from 480VDC back to 480VAC within the UPS system
  3. Transformation from 480VAC to 208VAC at the PDU
  4. Conversion from 208VAC to DC voltages within the server power supply

In order to eliminate many of these unnecessary power conversions, energy can be distributed at a DC voltage directly to the server power supplies as opposed to converting the DC power in the UPS back to AC power and then converting back again to DC at the server.  Depending upon the age and technology of power equipment utilized, the conversion to a DC power distribution model can result in efficiency gains of 10 to 20 percent thanks to a reduction in the number of power conversions.  Though the existence of DC power distribution in the data center industry is nothing new, modern technological developments have made this system more attainable than it was previously.  In fact, many telecommunications companies have taken advantage of the efficiency and reliability of DC power systems for decades.  Some of the benefits of DC power distribution over AC power distribution include:

  1. Fewer power conversions between AC and DC voltages result in a smaller parts count, which improves reliability and reduces maintenance costs;
  2. Fewer power conversions increase system efficiency and reduce energy costs;
  3. Less equipment may reduce capital investment relative to a comparable, new AC distribution system;
  4. Less equipment also reduces the footprint required on site; and
  5. Harmonic distortion and phase balancing are not a concern with DC power distribution, which eliminates the need for power filtering and minimizes stranded capacity.
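
To see where efficiency gains of 10 to 20 percent can come from, the sketch below multiplies assumed per-stage conversion efficiencies for a conventional AC chain and for a shorter 380VDC chain.  Every efficiency value is an illustrative assumption for the sake of the arithmetic, not measured equipment data.

```python
# Cascaded conversion losses: end-to-end efficiency is the product of the
# stage efficiencies, so removing stages raises overall efficiency.
# All stage efficiencies below are illustrative assumptions.

def chain_efficiency(stage_efficiencies: list[float]) -> float:
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    result = 1.0
    for eff in stage_efficiencies:
        result *= eff
    return result

# Conventional AC chain: UPS rectifier, UPS inverter, PDU transformer, server PSU.
ac_chain = chain_efficiency([0.96, 0.96, 0.97, 0.90])
# 380VDC chain: one rectification stage plus the server's DC-DC power supply.
dc_chain = chain_efficiency([0.96, 0.95])

print(f"AC chain: {ac_chain:.1%}")   # ~80.5%
print(f"DC chain: {dc_chain:.1%}")   # ~91.2%
```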

Though many telecommunications companies have traditionally relied on low-voltage DC systems (48VDC), the higher power consumption requirements of data centers fit better with 380VDC systems.  Because data center servers run at higher power densities, delivering that power at 48VDC results in an extremely high current draw and requires much larger conductor sizes to provide ample power.  The use of 380VDC eliminates this need, while working well within typical server power supply limits.  Since they operate in the same voltage range, users of 380VDC can additionally benefit from the ability to integrate with renewable energy sources such as photovoltaic (solar) arrays and fuel cells.
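
The conductor-sizing point can be illustrated with a line of Ohm’s-law arithmetic: for the same delivered power, current at 48VDC is roughly eight times higher than at 380VDC.  The 20 kW rack load below is a hypothetical example.

```python
# Current drawn for the same power at different distribution voltages (I = P / V).
# Higher current at 48VDC forces much larger conductors; 380VDC keeps current low.
# The 20 kW rack load is a hypothetical example.

def current_amps(power_watts: float, voltage: float) -> float:
    """Current required to deliver a given power at a given DC voltage."""
    return power_watts / voltage

rack_power_w = 20_000  # 20 kW rack (illustrative)
print(f"48VDC:  {current_amps(rack_power_w, 48):.0f} A")   # ~417 A
print(f"380VDC: {current_amps(rack_power_w, 380):.0f} A")  # ~53 A
```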

Two major issues have delayed the acceptance of 380VDC systems in the past: the limited availability of DC power systems and server power supplies, and safety concerns related to high voltage at the rack and server levels.  Thanks to recent innovations and advancements, DC power systems are now more readily available, prompted by the successful deployment of DC distribution in Asia and Europe and by groups like the EMerge Alliance, which have developed standards for the commercial adoption of DC power.  Noting the potential of this shift in data center power distribution, several manufacturers are now producing DC circuit protection and power supply cord connectors which address concerns with user safety related to DC voltage and arc protection.

The data center ecosystem is experiencing an exciting shift in traditional practices, making room for the development of a more sustainable environment as 380VDC and other energy efficient practices become widely adopted.

Join us on December 9 at DatacenterDynamics Converged Dallas to explore this topic further when we present, “Is it Finally Time for 380VDC Power in the Data Center?” at 12:20 PM local time.

To meet with Mr. Stark during the event, please email info@eecnet.com.

To learn more about Electronic Environments Corporation (EEC), visit www.eecnet.com.