Reminiscing Past and Predicting Future Data Center Trends

By: Kevin O’Brien, President, Mission Critical Construction Services, EEC


After nearly 30 years in the data center industry, it’s interesting to see how the advent of new technologies and current events have impacted the data center’s evolution and shaped the structures we see all over the world today.  Here’s a blast from the past and a look into the future of data center facilities:

The 1980s and 1990s

I began my career in the data center industry in the ‘80s as a Facilities Manager for a large financial services company headquartered in New York.  At the time, it was commonplace for data center facilities to be located within the same building as trading and office spaces.

1988 marked the year that we began construction of our very first remote site facility outside of New York, solely dedicated to data and telecommunications.  Prior to our data center, the site was home to an ITT communications hub in New Jersey that functioned as the headquarters for the “Hot Line” link between Washington, DC and Moscow.  During this time, the majority of functions were analog, and having the remote site gave us a way to increase the redundancy and reliability of electrical and mechanical systems.  Back then there was no Tier certification system, but through this we were able to reach what is now considered a Tier II standard for electrical systems and additionally achieved the equivalent of 2N on the UPS.  In the late ‘80s, loads in data centers ranged from a mere 35 to 50 watts per square foot at maximum.

To meet the rise in demand for fiber and reliable computers, many companies began choosing remote sites during the ‘90s.  In 1989, coming as no surprise to industry professionals, the 7×24 Exchange started publishing articles sharing practitioners’ common experiences in seeking ways to improve overall reliability throughout mission-critical facilities.  Stemming from this, the Uptime Institute was created in the early ‘90s, bringing with it the widely used Tier certifications.

The Dotcom Era

The next noteworthy paradigm shift occurred during the dotcom boom.  To meet exploding demand, companies began constructing data centers containing more than 100,000 square feet filled with racks equating to roughly 50-75 watts per square foot at full capacity.  Companies were able to build anywhere in the world thanks to the rapid proliferation of fiber as well as the economic upswing.  With unbridled optimism, companies overbuilt, only to watch the stock market plummet following the events of September 11.  However, after a few years, the predicted demand for servers finally materialized.

Following the dotcom collapse, the Sarbanes-Oxley (SOX) law was enacted in 2002, requiring data centers that supported financial trading to locate their facilities within a designated number of fiber miles from Wall Street.  In addition to its proximity standards, SOX also required the construction of a separate, synchronous data center for redundancy, resulting in construction growth throughout New Jersey.  By this point, sites began to encroach upon the 100 watt per square foot barrier, while many reached Tier III and IV status.  Square footage and costs continued to rise as additional space was needed to support more robust infrastructure.

Demand for Density and Redundancy

Alongside the need for greater density and redundancy, this era saw the development of the building square foot ratio of raised floor to infrastructure.  For example, 100,000 square feet of raised floor area (also known as white space) at 100 watts per square foot in a Tier III configuration would have a ratio of 1-to-1.  If the density of the space increased to 150 watts per square foot, the ratio would increase to 1-to-1.5.  In other words, one would need 150,000 square feet of space to support the infrastructure for the same 100,000 square feet of raised floor.  This type of infrastructure was developed to meet a need for IT load support that – in most cases – never actually materialized.  As density increased, the industry shifted to measuring kilowatts (kW) per rack instead of watts per square foot, a more accurate metric.
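The ratio arithmetic described above can be sketched in a few lines. This is a hypothetical illustration assuming, as in the example, that infrastructure area scales linearly with power density from a 1-to-1 baseline at 100 watts per square foot:

```python
# Baseline density at which raised floor and infrastructure space are 1-to-1
# (an assumption taken from the Tier III example above).
BASELINE_DENSITY_W_PER_SQFT = 100

def infrastructure_sqft(white_space_sqft: float, density_w_per_sqft: float) -> float:
    """Infrastructure area needed to support a given white space and density."""
    ratio = density_w_per_sqft / BASELINE_DENSITY_W_PER_SQFT
    return white_space_sqft * ratio

print(infrastructure_sqft(100_000, 100))  # 100000.0 -> a 1-to-1 ratio
print(infrastructure_sqft(100_000, 150))  # 150000.0 -> a 1-to-1.5 ratio
```
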


More Power/More Efficiency

Once the new measurement model took hold and the industry began rating in kW or cost per kW, people began to clearly understand how much power was actually being used and at what cost.  This shift triggered an energy rating system known as Power Usage Effectiveness (PUE), creating a universal measure of how efficiently or inefficiently a facility uses energy.  As a result, data center managers were now held accountable for energy consumption, causing them to seek free cooling and higher levels of efficiency even as densities increased from 3 to 4 kW per rack to 16 to 25 kW per rack. In response, some facilities such as Yahoo!’s completely eliminated mechanical cooling (chillers) in favor of utilizing outside air to provide “free cooling” through a simple structure called the “chicken coop”.  While this worked well for Yahoo! and other similar data centers, it was not an ideal solution for most enterprise data centers, and “hot aisle, cold aisle” became the norm.  Through this practice, operators were able to isolate the load, resulting in higher levels of energy efficiency and a trend of elevating temperatures inside the data hall.  The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) helped clarify guidance on raised server inlet temperatures, but the race to achieve the lowest PUE continues.
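The PUE metric itself is a simple ratio: total facility energy divided by IT equipment energy, with 1.0 as the theoretical ideal. A minimal sketch, using illustrative load figures rather than any values from the article:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_load_kw

# A legacy facility: 1,000 kW of IT load plus 800 kW of cooling and losses.
print(round(pue(1_800, 1_000), 2))  # 1.8
# A free-cooling facility: the same IT load with only 150 kW of overhead.
print(round(pue(1_150, 1_000), 2))  # 1.15
```

Lowering the overhead (the numerator's non-IT share) is exactly what free cooling and hot aisle/cold aisle containment target.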

The shift towards energy efficiency and “greener” facilities propelled industry leaders to develop more innovative data center technologies including high efficiency chillers, adiabatic cooling, fuel cells, solar power and 380V Direct Current (DC).  These recent developments may foreshadow the future elimination of multiple Alternating Current (AC)/DC conversions and mechanical compressors, making way for a more distributed generation of clean power.  Likewise, alternative energy sources are also growing in cost efficiency.  One example is the Bloom Box fuel cell from Bloom Energy.

For the past 26 years, I’ve had a front row seat to the continuous evolution of the data center, and with each change, I’ve had to adapt my environment to new design and construction standards.  Whether it’s an Internet or cloud data center, big open box space or modularized data halls and pods, I’m excited to see all of the growth and change that the next 10 years will bring.

Please welcome Pacer Group to Data Center Discovery!

Pacer has proven capabilities serving a wide range of manufacturers that require UL/CSA-approved wire, such as makers of batteries, forklifts and golf carts, industrial equipment, alternative power systems, appliances, and data storage. We also have a wide range of products that serve automotive, performance racing, truck, RV, and off-road vehicles.

Choose Pacer for quick, accurate, and cost competitive wire, cable, and electrical parts.

For more information please visit the Pacer Group’s FREE corporate profile at Data Center Discovery

For more information about how your company can get listed in the Data Center Discovery global directory of data center solution providers please email becca@datacenterdiscovery.com

380VDC Power Has Evolved, Providing Data Centers New Ways to Minimize Energy Loss and Improve Reliability

By Jim Stark, P.E., Principal of Engineering, Electronic Environments Corporation


Quickly becoming one of the most viable options when powering a data center, modern advancements in the Direct Current (DC) power distribution model are poised to increase energy efficiency and reliability like never before.

Traditionally, data center power distribution models follow a consistent formula, including multiple voltage power conversions between the electric utility and the server.  Distribution transformers, Uninterruptible Power Supply (UPS) systems, and Power Distribution Units (PDU) all introduce AC (alternating current) to DC conversions and voltage transformations in the power chain, oftentimes resulting in wasted energy.  This typical power distribution model can include a:

  1. Conversion from 480VAC to 480VDC within the UPS system
  2. Conversion from 480VDC back to 480VAC within the UPS system
  3. Transformation from 480VAC to 208VAC at the PDU
  4. Conversion from 208VAC to DC voltages within the server power supply

In order to eliminate many of these unnecessary power conversions, energy can be distributed at a DC voltage directly to the server power supplies as opposed to converting the DC power in the UPS back to AC power and then converting back again to DC at the server.  Depending upon the age and technology of power equipment utilized, the conversion to a DC power distribution model can result in efficiency gains of 10 to 20 percent thanks to a reduction in the number of power conversions.  Though the existence of DC power distribution in the data center industry is nothing new, modern technological developments have made this system more attainable than it was previously.  In fact, many telecommunications companies have taken advantage of the efficiency and reliability of DC power systems for decades.  Some of the benefits of DC power distribution over AC power distribution include:

  1. Fewer power conversions between AC and DC voltages result in a smaller parts count, which improves reliability and reduces maintenance costs;
  2. Fewer power conversions increases system efficiency and reduces energy costs;
  3. Less equipment may reduce capital investment of a comparable, new AC distribution system;
  4. Less equipment also reduces the footprint required on site; and
  5. Harmonic distortion and phase balancing are not a concern with DC power distribution, which eliminates the need for power filtering and minimizes stranded capacity.
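The efficiency argument behind the conversion chain above can be made concrete: stage efficiencies multiply, so removing stages compounds the savings. The per-stage efficiencies below are illustrative assumptions, not measured values from the article:

```python
from math import prod

def chain_efficiency(stage_efficiencies: list[float]) -> float:
    """Overall efficiency of a series of power conversion stages."""
    return prod(stage_efficiencies)

# Traditional AC chain: UPS rectifier, UPS inverter, PDU transformer, server PSU
# (hypothetical efficiencies for each stage).
ac_chain = chain_efficiency([0.97, 0.97, 0.98, 0.90])
# 380VDC chain: one rectification stage, then the server's DC-DC conversion.
dc_chain = chain_efficiency([0.97, 0.94])

print(f"AC chain:  {ac_chain:.1%}")  # ~83.0%
print(f"DC chain:  {dc_chain:.1%}")  # ~91.2%
```

With these assumed figures the DC chain lands roughly 8 points higher, consistent with the 10 to 20 percent gains cited below for older AC equipment.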

Though many telecommunication companies have traditionally relied on low voltage DC systems (48VDC), the higher power consumption requirements of data centers fit better with 380VDC systems. Since data center servers run with higher power densities, this results in an extremely high current draw at 48VDC and requires much larger conductor sizes to provide ample power. The use of 380VDC eliminates this need, while working well within typical server power supply limits.  Since they operate in the same voltage range, users of 380VDC can additionally benefit from the ability to integrate with renewable energy sources such as photovoltaic (solar) arrays and fuel cells.
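The conductor-sizing point follows directly from Ohm's law: for the same power, current scales inversely with voltage (I = P / V). A quick sketch, using a hypothetical 20 kW rack as the load:

```python
def current_amps(power_watts: float, voltage: float) -> float:
    """Current drawn by a load of the given power at the given voltage."""
    return power_watts / voltage

RACK_POWER_W = 20_000  # a hypothetical dense 20 kW rack

print(f"48VDC:  {current_amps(RACK_POWER_W, 48):.0f} A")   # ~417 A
print(f"380VDC: {current_amps(RACK_POWER_W, 380):.0f} A")  # ~53 A
```

At telecom-style 48VDC the same rack would draw roughly eight times the current of a 380VDC feed, which is why the lower voltage demands much larger conductors.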

Two major issues which have delayed the acceptance of 380VDC systems in the past include: the availability of DC power systems and server power supplies, and safety concerns related to high voltage at the rack and server levels.  Thanks to recent innovations and advancements, DC power systems are now more readily available, prompted by the successful deployment of DC distribution in Asia and Europe and due to groups like EMerge Alliance, which have developed standards for the commercial adoption of DC power.  Noting the potential of this shift in data center power distribution, several manufacturers are now producing DC circuit protection and power supply cord connectors which address concerns with user safety related to DC voltage and arc protection.

The data center ecosystem is experiencing an exciting shift in traditional practices, making room for the development of a more sustainable environment as 380VDC and other energy efficient practices become widely adopted.

Join us on December 9 at DatacenterDynamics Converged Dallas to explore this topic further when we present, “Is it Finally Time for 380VDC Power in the Data Center?” at 12:20 PM local time.

To meet with Mr. Stark during the event, please email info@eecnet.com.

To learn more about Electronic Environments Corporation (EEC), visit www.eecnet.com.

Not Ready for Winter? Your Data Center Humidification System Should Be

By Jim Lundrigan, Vice President of Operations, Electronic Environments Corporation


The cold is almost upon us once again, and just like people, data centers can be prone to the winter blues.  As temperatures drop and humidity decreases, it is important to ensure the necessary environmental adjustments are made within data centers to protect equipment.  This entails ensuring that equipment which may have remained inactive during the summer and fall months can now operate more frequently at peak efficiency.  In the cooler, drier climate brought on by the changing seasons, this means maintaining proper humidity levels, which is essential to achieving high availability and reducing operational costs within the data center.

Proper maintenance of data center humidification systems is necessary in drier climates to prevent static electricity from building up and discharging – typically caused by cool, low-humidity air moving throughout the facility.  Static electricity and electrostatic discharge (ESD) can lead to damaged computing equipment, including instances of blown fuses.  For the majority of data center facilities, staying within the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommended temperature and humidity guidelines (dry-bulb temperature of 64.4°F to 80.6°F; dew point of 41.9°F to 59°F) ensures a highly stable and effective environment for the efficient and reliable operation of mission-critical functions.
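A monitoring system can express the ASHRAE recommended envelope cited above as a simple range check. This is a minimal sketch in which the thresholds come from the guidelines quoted in this article; real monitoring stacks track dew point and dry-bulb temperature from calibrated sensors:

```python
# ASHRAE recommended envelope, in °F, as cited above.
RECOMMENDED = {"temp_f": (64.4, 80.6), "dew_point_f": (41.9, 59.0)}

def in_envelope(temp_f: float, dew_point_f: float) -> bool:
    """True when both readings fall inside the recommended envelope."""
    lo_t, hi_t = RECOMMENDED["temp_f"]
    lo_d, hi_d = RECOMMENDED["dew_point_f"]
    return lo_t <= temp_f <= hi_t and lo_d <= dew_point_f <= hi_d

print(in_envelope(72.0, 50.0))  # True: comfortably inside the envelope
print(in_envelope(72.0, 35.0))  # False: dry winter air, ESD risk
```
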

Identifying and remediating malfunctioning humidification equipment is also key to preventing water leakage resulting from blocked drainage.  Water leakage can severely damage the unit and surrounding IT equipment.  Malfunctioning humidification equipment is also the culprit behind the less obvious excess water vapor, which can slowly deteriorate components of a cooling unit.  Well-maintained systems are critical to reducing nuisance alarms, return calls and their related high price tags between preventive maintenance (PM) visits.  Monitoring and changing set points can also result in substantial power savings and increased energy efficiency.  At Electronic Environments Corporation (EEC), we’ve noticed a trend, much like that of raising intake air temperatures, in which customers are lowering humidification set points from 45% to 40% to 35% to reduce energy costs over time.

EEC has been providing customers with strategic guidance across all areas of data center systems, designs and environments for over three decades.  We help our customers prepare for and proactively prevent future data center challenges by providing a comprehensive scheduling of inspections and repairs.  This includes critical elements such as ensuring humidification systems are functioning at peak efficiency through the modification settings while preparing for seasonal changes and cooler, drier weather.  EEC’s humidification system inspections encompass:

  • Starting a humidification system and performing diagnostics;
  • Checking condensation pump operations and cleaning as needed;
  • Confirming that set points are operating correctly;
  • Examining overloads, fuses and electrical operations to ensure proper functionality;
  • Removing and cleaning humidifier pans and drain lines from build-up and deposits;
  • Adjusting pan water levels while testing and adjusting water overflow safety devices;
  • Calibrating humidity sensors; and
  • Maintaining water filtration systems that feed humidifiers (where applicable).

EEC leverages deeply-rooted data center expertise to develop and deploy customized solutions for data center maintenance in order to accommodate each customer’s unique business needs.  Examples of our customized solutions include steam canister humidifiers, which offer simpler troubleshooting but higher replacement costs; UV systems, which require more maintenance; and ultrasonic systems, which require larger up-front investments but run more efficiently and produce long-term energy savings.

While maintenance is critical during seasonal changes, it’s important to inspect equipment regularly year-round to ensure data centers function efficiently.  Prepare your humidification system and data center for the coming cold and dryness.  While you may not be ready for winter, your data center certainly should be; your business depends on it.

To learn how EEC can help your data center prepare for the cold, email info@eecnet.com or click here for more information.

AMS-IX USA Inc. Partners with CME Group to Launch First AMS-IX Chicago PoP


A new partnership between industry leaders has led to the establishment of an alternative interconnection model in the Chicago metropolitan area.  Expanding on its successful business model of AMS-IX in Amsterdam, AMS-IX (Amsterdam Internet Exchange) recently announced that its U.S. subsidiary, AMS-IX USA Inc., has partnered with CME Group to open the first OIX-1-compliant AMS-IX Chicago Point of Presence (PoP) located inside the CME Group Cermak Hosting Facility at 350 E. Cermak.

This is AMS-IX USA Inc.’s third deployment in the United States, building on the success of AMS-IX New York in Digital Realty’s 111 8th Ave. data center; DuPont Fabros Technology’s Piscataway, NJ facility; Sabey’s Intergate Manhattan data center at 375 Pearl Street; and the 325 Hudson Street interconnection facility; as well as AMS-IX Bay Area in Digital Realty’s 365 Main Street, San Francisco facility.  A second PoP for AMS-IX Chicago (to be launched with the support of another data center provider) is also expected to go live in 2015.

AMS-IX Chicago brings with it an affordable, neutral, and distributed Internet Exchange model, enabling connected parties of AMS-IX Chicago in one data center to exchange traffic with connected parties in another data center.  Like its New York and Bay Area counterparts, the Windy City PoP is instrumental in increasing geographic diversity, simplifying connectivity and reducing costs for businesses and end-users.  Furthermore, it serves as a key driver for economic stimulation in the Chicago metro area by attracting more business, stimulating both job growth and encouraging innovation.

The relationship between AMS-IX Chicago and CME Group provides companies connecting to the Chicago PoP the capability to interconnect and house their critical equipment in one of the world’s premier data centers – a secure and modern facility located in downtown Chicago.  The strategic partnership also offers AMS-IX Chicago the ability to tap into a new market segment in need of reliable, resilient Internet and data centers for its mission-critical operations – CME’s content and financial services clients.  By peering directly with AMS-IX Chicago, these customers will benefit from reduced latency and transit costs.

For more information, visit www.chi.ams-ix.net and www.cmegroup.com.

Rising Demand for Data Center Construction Yields Three Strategic Appointments to the Electronic Environments Corporation Team

By: Laurie Samper, Technical Writer, iMiller Public Relations

According to analysts at TechNavio, the global data center construction market is projected to grow at a compound annual growth rate (CAGR) of 21.99 percent over the period 2013-2018. With growing customer demand for new data center construction, Electronic Environments Corporation (EEC) appointed three new members to its dynamic team: Mike Walsh, Technical Services Manager; Robert Hoffman, Project Executive; and Scott Willard, Northeast Region Senior Project Manager/Construction Manager.  The news of EEC’s expansion comes shortly after the recent onboarding of Mission Critical Construction Services Division President Kevin O’Brien in July.

Mr. Hoffman joins EEC as Project Executive, Mission Critical Construction Services, where he is responsible for overseeing all aspects of service quality as well as the development of the project management team along with its strategy, systems, controls and performance.  Prior to EEC, Mr. Hoffman served as Senior Project Manager at CBRE, where he oversaw several large technology infrastructure projects.  He also worked at Skanska USA as a Senior Project Manager for its Buildings Mission Critical team and Structure Tone, where he was one of the first employees of its mission-critical group handling national data center roll-outs for companies such as Teleglobe.

Mr. Walsh has played a key role at EEC, creating the company’s Technical Services Division as well as setting the standards for critical infrastructure construction.  As Technical Services Manager, he is responsible for executing all facets of pre-construction initiatives.  Prior to EEC, Mr. Walsh developed formulas to build a $200M data center for AT&T and has served as Technical Services Manager for numerous companies including the New England Center for Excellence for Gilbane Builders and Structure Tone Mission Critical.

As EEC’s Northeast Region Senior Project Manager/Construction Manager, Mission Critical Facilities Services, Mr. Willard is responsible for managing project schedules, developing project scopes and pricing, creating and maintaining EEC construction standards, and overseeing quality control functions during construction projects.  Prior to joining EEC, he served as a Senior Project Executive, Construction Division at Tocco Building Systems.

With over 25 years of industry experience, EEC is committed to supporting its customers through all phases of the mission critical lifecycle leveraging its unique, holistic approach, Mission Critical Lifecycle Services (MCLS).  This includes all aspects of planning, design management, construction, operations and maintenance and assessments.  The company’s new appointments bring a wealth of experience and knowledge to the EEC team, enabling it to further build on its customer commitment and propelling the company to the forefront of the data center and telecommunications facility design, build and maintenance services industry.

For more information about Electronic Environments Corporation and its Mission Critical Lifecycle Services, please visit www.eecnet.com and http://www.eecnet.com/Home/Mission-Critical-Construction-Services/.

AMS-IX – Spreading Its Mission for Simplified Internet Exchange

By: Mark Cooper, CCO AMS-IX


The Amsterdam Internet Exchange (AMS-IX) Association was founded 17 years ago to solve the growing problem of an overall lack of control over Internet traffic for many ISPs, in addition to the continually increasing dependency on foreign exchanges for ample connectivity.  This innovative Internet Exchange in Amsterdam has provided ISPs and other types of Internet-related companies the ability to obtain an affordable and transparent network structure.  Since its inception in 1997, AMS-IX has expanded its reach through the formation of new subsidiaries such as AMS-IX USA, Inc. – which has given way to AMS-IX New York and AMS-IX Bay Area – further developing its global mission of providing simplified network connectivity throughout the US through affordable, neutral and distributed Internet Exchange solutions.  Today, the 12 AMS-IX Points of Presence (PoPs) in Amsterdam serve nearly 700 connected parties throughout the region and boast a peak Internet traffic rate of almost 3 Tb/s, making it the world’s largest Internet connectivity hub.

AMS-IX’s subsidiary, AMS-IX USA, Inc., established a critical partnership with Digital Realty, DuPont Fabros Technology, Inc., Sabey Data Centers and 325 Hudson in November 2013 to develop a distributed Internet Exchange known as AMS-IX New York.  Recently, AMS-IX also established a PoP in Digital Realty’s 365 Main Street facility in San Francisco.  Offering an affordable and highly distributed Internet Exchange platform, AMS-IX Bay Area provides exceptional value to the interconnectivity market in the San Francisco / San Jose region.  AMS-IX enables companies to meet their goals through the generation of a regional hub distributed over multiple colocation facilities, as well as through offering increasingly reliable, effective and simplified exchange of Internet traffic.

AMS-IX’s partnership with Digital Realty will also offer business customers unique access to a more secure, faster and cost-effective Internet peering service structure within Digital Realty’s stable, reliable interconnection facility in downtown San Francisco.  By peering directly with AMS-IX Bay Area, Digital Realty data center clients can experience benefits including improved connectivity and extended reach as well as reduced latency and transit costs.  AMS-IX’s globally proven model is expected to stimulate economic growth and job cultivation throughout the San Francisco Bay Area, attracting new businesses into the region.

AMS-IX remains dedicated to the Open-IX Association’s mission of developing criteria and methods of measurement for data transfer and physical connectivity, reducing the complexity that restricts interconnection in fragmented markets in order to ultimately improve IX performance throughout the world.  Similar to its counterpart in New York, AMS-IX’s newest Bay Area Internet Exchange PoP will be OIX-1 (IXP Technical Standards) compliant, ensuring high quality technical and operational standards for Internet Exchanges.  With its strong commitment to the OIX mission, AMS-IX recognizes the potential value in developing uniform and low-cost standards of global connectivity performance.

Its steadfast dedication to these standards, coupled with its strategic partnership with Digital Realty, has positioned AMS-IX USA as a critical influencer in the US Internet Exchange market.  Encouraging the development of a European model of neutral and distributed Internet Exchanges to reduce IP interconnection and associated costs, AMS-IX will leave users with a cost-effective and transparent network connectivity structure, generating interconnectivity that is faster, simpler and more effective than ever before.

For further information about AMS-IX and its mission for neutral and distributed Internet Exchanges, visit www.ams-ix.net and https://bay.ams-ix.net.

Long-Term Benefits of Proactive Data Center Planning

By: Joanna Styczen, Technical Writing Director, iMiller Public Relations

Failing to consider long-term needs during the planning stages of a data center project can result in disastrous consequences in the future.  It is absolutely critical to provide due diligence during the early developmental stages of infrastructure design to avoid potential issues down the road.  Proper planning and management can create an optimum environment encompassing faster, simpler and more efficient processes throughout the data center lifecycle.  Through meticulous custom design, customers can also benefit from ease of installation, growth anticipation, serviceability and flexibility for years to come.

Electronic Environments Corporation (EEC) leverages nearly three decades of experience in mission critical facility design and management to maximize the overall efficiency of clients’ facilities.

Hoping to educate the public using its extensive expertise, EEC has issued a position paper entitled “Key Considerations before Beginning a New Data Center Project”. This paper, written by EEC’s President of Mission Critical Construction Services Kevin O’Brien, offers customers insight into EEC’s unique, holistic view and forward-thinking process for data center design.  EEC helps develop all-inclusive, tested solutions to ensure optimum productivity throughout the entire lifecycle of mission critical facilities.  By proactively eliminating both common and complex issues alike, EEC assists companies in avoiding unnecessary expenses and ensuring continual uptime.

Without proper, experienced guidance and support, data center projects are often ill-conceived, leaving companies that opted for lower costs during the early design process struggling when inevitable future challenges arise.  Real-life examples in “Key Considerations before Beginning a New Data Center Project” illustrate how spending a little extra time and money upfront enables a company to save time, money and manpower while maintaining a positive reputation in the long run.

To read EEC’s latest position paper, ‘Key Considerations before Beginning a New Data Center Project’, visit http://www.eecnet.com/key-considerations-before-beginning-a-new-data-center-project/.

For more information about how your company can appear in the Data Center Discovery global directory of data center solution providers please email becca@datacenterdiscovery.com

AMS-IX to Sponsor Capacity North America 2014

By Mark Cooper, CCO, AMS-IX

As the world of interconnection continues to develop, the Netherlands-headquartered Amsterdam Internet Exchange (AMS-IX) is bringing its proven, successful European business model of interconnectivity to the United States.  Following the recent launch of AMS-IX New York in four data centers in the New York/New Jersey metropolitan area, AMS-IX USA Inc. is working to establish more neutral, distributed Internet exchanges across the country.

AMS-IX is an Associate Sponsor of Capacity North America 2014 in San Francisco, September 11 and 12. During Capacity N.A., AMS-IX will demonstrate to conference attendees the European model of neutral and distributed Internet exchanges, which creates a digital infrastructure that provides uncomplicated, transparent and affordable interconnectivity to a wide array of businesses.  Leveraging its Open-IX certification, AMS-IX operates at the cutting edge of technology as the company continues its mission to provide superior Internet exchange solutions around the globe.

Capacity North America 2014 takes place in San Francisco, CA – the technological epicenter of the continent – for the first time ever.  With more than 420 leaders in wholesale telecommunications attending the event, the 13th annual Capacity N.A. will bring attendees deep into discussions and debates regarding current market trends and challenges faced by the telecom community – an endeavor necessary for its continued evolution and development.  Leveraging abundant networking opportunities with content providers, tech companies and application-based enterprises, Capacity N.A. paves the way for the future evolution of the North American telecom landscape and provides a platform for influential community members to gain the necessary insight and contacts to continue their technological development.

To learn more, come see AMS-IX at Capacity North America 2014 at booth #12 and visit us.ams-ix.net.

For more information about how your company can get listed in the Data Center Discovery global directory of data center solution providers please email becca@datacenterdiscovery.com

For more information about how Data Center Discovery can spread the news about your data center event please email becca@datacenterdiscovery.com

PEN 2.0: The Next Evolution of Cloud Network

By: Jon Vestal, Vice President, Product Architecture, Pacnet

Over the past 10 years, cloud development and bandwidth-hungry applications such as cloud storage operations, videoconferencing and video streaming have continued their rapid proliferation in the market, consuming capacity and leaving traditional networks struggling to keep pace. Their propagation has spurred a growing customer need for burstable hybrid cloud solutions for disaster recovery and e-commerce, as well as flexibility in moving workloads from one location to another.  Moreover, today’s organizations also need compute, connectivity and storage resources that can be dynamically scaled up or down according to demand. They also require access to these tools and systems without time or location restrictions, while still being able to maintain the utmost security.

As a result, in February we launched our award-winning Pacnet Enabled Network (PEN), the first pan-Asian Network-as-a-Service (NaaS) architecture to power cloud deployments.  The platform has proved to be a tremendous solution for our customers, enabling them to solve complex network challenges and build high-performance, cost-effective, scalable and cloud-ready networks.  Due to strong market demand, the platform was also extended into the United States, in addition to deployments in Australia, Hong Kong, Japan and Singapore.  It is this same demand, coupled with new challenges spurred by rapid market advancement, that serves as the driving force behind Pacnet’s commitment to continued innovation.

Keeping Up with Advancing Technology

When our customers spoke, we listened – a practice that I believe sets Pacnet apart from other providers.  In direct response to customer feedback and needs, we will be introducing the next evolution of the PEN platform, which we call PEN 2.0, in mid-September.

PEN 2.0 widens the scope of SDN, allowing a broader set of services in traditional network topologies to be deployed and scaled on demand.  For example, the platform allows carrier customers to create individual network topologies at a very granular level, stripping away complexity to yield a clear network infrastructure.  PEN 2.0 further enables hybrid cloud deployments for our customers, facilitating connectivity from enterprise-class data centers and private clouds to any external cloud vendor and allowing customers to burst workloads from one end to the other over our on-demand bandwidth.  PEN 2.0 also boasts a series of new features.

With the release of the enhanced platform, Pacnet has become the first carrier to deploy Network Functions Virtualization (NFV) solutions in an OpenStack environment, including vFirewall and vRouter.  The integration of NFV into the OpenStack environment did not come without its challenges.  One issue faced by our development team was the inability to reuse network resources with the OpenStack networking module, Neutron.  For example, after a virtual network was created, Neutron prevented the removal of a virtual appliance and its replacement with another virtual appliance while still leveraging the original physical infrastructure.  The issue was resolved when our developers accessed the OpenStack source code and modified it to meet our specific needs – a key reason Pacnet selected OpenStack in the first place.

PEN 2.0 also features Virtual Local Area Network (VLAN) support that connects multiple endpoints for different application environments with varying characteristics to meet specific customer needs. VLAN support has allowed Pacnet to run a dedicated Ethernet connection directly into the Amazon Web Services (AWS) environment and to leverage the PEN 2.0 platform to extend these circuits anywhere across a customer’s network.

The third and final new feature of PEN 2.0 is the integration of approval chains to ensure secure access to the service platform.  The approval chain lets customers define their own workflows for the PEN 2.0 environment through the workflow management tool.  Only users or user groups that have been granted permission may approve modifications to workflows or the environment.
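The approval-chain idea described above can be sketched in a few lines of code. This is a minimal illustration only – the class, step names and user names (`alice`, `carol`, etc.) are hypothetical and not part of the PEN 2.0 API: a change is applied only once every step in the chain has been approved by a permitted user.

```python
class ApprovalChain:
    """Toy model of a workflow approval chain: ordered steps,
    each approvable only by users granted permission for it."""

    def __init__(self, steps):
        # steps: ordered list of (step_name, set_of_permitted_users)
        self.steps = steps
        self.approvals = {}  # step_name -> approving user

    def approve(self, step_name, user):
        permitted = dict(self.steps).get(step_name, set())
        if user not in permitted:
            raise PermissionError(f"{user} may not approve {step_name}")
        self.approvals[step_name] = user

    def is_fully_approved(self):
        # The modification proceeds only when every step is approved
        return all(name in self.approvals for name, _ in self.steps)


chain = ApprovalChain([
    ("network_change", {"alice", "bob"}),
    ("security_review", {"carol"}),
])
chain.approve("network_change", "alice")
chain.approve("security_review", "carol")
print(chain.is_fully_approved())  # True
```

The key property is that approval rights are bound to steps, not to the platform as a whole, so a customer can mirror its own internal change-control process.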

The Twin Technologies

While SDN separates network control from the physical infrastructure, such as routers and switches, to support a network fabric across multi-vendor equipment, NFV focuses on virtualizing network components – decoupling important network functions from the physical devices that traditionally host them.  Essentially, NFV delivers functions such as a virtualized router from within the cloud.

Together, SDN and NFV are synergistic – capable of orchestrating the delivery of a virtual appliance and its network with no human interaction, and enabling end-to-end self-provisioning. SNS Research estimates that SDN, NFV and network virtualization will account for nearly $4 billion in spending in 2014 alone, growing at a CAGR of nearly 60 percent over the next six years. By 2020, SNS estimates that SDN and NFV will enable service providers to save up to $32 billion in annual CapEx.
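As a quick sanity check on the SNS Research figures quoted above, straight compounding of the ~$4 billion 2014 base at a ~60 percent CAGR for six years implies a market in the tens of billions by 2020 (the exact numbers below assume simple annual compounding, not SNS's model):

```python
# Back-of-the-envelope projection from the figures cited in the text
base_2014 = 4.0   # market size in 2014, billions USD (~$4B)
cagr = 0.60       # ~60% compound annual growth rate
years = 6         # 2014 -> 2020

projected_2020 = base_2014 * (1 + cagr) ** years
print(round(projected_2020, 1))  # ~67.1 (billions USD)
```

Even with rounding in the quoted estimates, the order of magnitude is consistent with the $32 billion in annual CapEx savings SNS projects by 2020.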

Tell us your requirements and business objectives now, and let our mighty PEN help propel your business.