Austin, Texas (May 22, 2018) – The terms AI (artificial intelligence) and ML (machine learning) have become common technology buzzwords. Simply defined, AI is the application of traits of human intelligence to computers. ML is a subset of AI, in which inputs are mapped to outputs to derive meaningful patterns. Businesses and analysts alike are touting how transformative AI and ML can be, fueled by massive amounts of data from the ever-expanding IoT (internet of things) ecosystem. While other buzzwords have come and gone without much tangible impact, AI does not seem to fit that category. AI is reaching far and wide, from manufacturing floors to supply chain management, and even to the operation of data centers themselves. Many enterprises are still planning and piloting AI applications to understand how the technology can transform their businesses, but data centers already provide an early example of a successful AI deployment.
Google use case
Starting in 2015, Google began applying ML in its data centers to help them operate more efficiently. Devan Adams, Senior Analyst on the IHS Markit Cloud and Data Center Research Practice, outlines some of the practices and results: “Google’s DeepMind researchers and its DC team began by taking historical data collected by thousands of sensors within its DCs, including temperature, power, water pump speeds, and set-points, then analyzed the data within deep neural networks to focus on lowering PUE (Power Usage Effectiveness), the ratio of total building energy usage to IT energy usage. In the end, they achieved up to a 40% energy reduction for cooling and a 15% reduction in overall PUE, the lowest the site had ever experienced. Google plans to roll out its system and share more details about it in an official publication so other DC and industrial system operators can benefit from its results.”
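The PUE ratio described above can be illustrated with a short sketch. The energy figures below are hypothetical assumptions for the sake of the arithmetic, not Google's actual numbers; only the 40% cooling reduction comes from the quoted result.

```python
# Hypothetical illustration of the PUE arithmetic described above.
# PUE = total facility energy / IT equipment energy (1.0 is the ideal).

def pue(it_energy_kwh, cooling_kwh, other_overhead_kwh):
    """Power Usage Effectiveness: total building energy over IT energy."""
    total = it_energy_kwh + cooling_kwh + other_overhead_kwh
    return total / it_energy_kwh

# Assumed example figures (kWh) -- not measured data:
it, cooling, other = 1000.0, 350.0, 150.0
before = pue(it, cooling, other)        # (1000 + 350 + 150) / 1000 = 1.50

# Apply the quoted 40% cut in cooling energy:
after = pue(it, cooling * 0.6, other)   # (1000 + 210 + 150) / 1000 = 1.36

print(f"PUE before: {before:.2f}, after: {after:.2f}")
```

Note that because cooling is only one component of facility overhead, a 40% cooling reduction produces a smaller percentage improvement in overall PUE, consistent with the two figures quoted above.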
This Google use case is a unique and relatively mature example of ML in data center operations. Google’s existing cloud infrastructure, access to large amounts of data, and significant in-house expertise allowed Google to become an early adopter of ML. For enterprise and colocation data center operators who lack those advantages, deploying ML in their data centers may seem like a daunting task, with significant cost and knowledge barriers to overcome. However, data center infrastructure suppliers are stepping in to bring ML-integrated cooling, power, and remote management capabilities to these data centers.
Data center cooling
Cooling has become the primary starting point for applying ML to data center infrastructure, because cooling consumes around 25% of the power a data center uses. Improving cooling efficiency therefore translates into serious savings, but this is not an easy task. Data centers are dynamic environments, with changing IT loads, fluctuating internal and external temperatures, variable fan and pump speeds, and differing sensor locations. DCIM (data center infrastructure management) tools have been helpful in collecting, managing, and visualizing data, but they remain tools that inform human-operated decisions, and even with all the data one could need, humans are prone to errors. ML brings data-driven, real-time automation to cooling, and the benefits are clear, according to Ernest Sampera, Chief Marketing and Business Development Officer at colocation provider vXchange: “Data center cooling integrated with machine learning is a no-brainer – it has helped us reduce human errors, and overall power and costs associated with cooling. Our on-site technicians can spend more time on our customers instead of being focused on tuning cooling efficiencies.”
Power and energy storage
Power distribution and backup power systems (UPSs) contain limited AI today. They can use firmware to make basic decisions based on a sensor’s input and pre-programmed, desired outputs, but they are not programmed to learn from changing inputs and outputs the way ML-integrated cooling already is. For UPSs, integrating ML has a different end goal than cooling: ML-integrated UPSs will focus on preventing downtime, through failure prediction and preventative maintenance, either self-performed or by alerting engineers to a specific problem. There are also likely to be some minor efficiency gains in UPSs from ML-based automation. Where AI and ML are not yet integrated is backup energy storage. That story, a much more interesting one, begins with lithium-ion batteries.
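To make the predictive-failure idea concrete, here is a minimal sketch, not any vendor's actual algorithm: flag UPS battery-temperature readings that deviate sharply from a trailing baseline, the kind of drift that would trigger a maintenance alert before a failure. The sensor values, window size, and threshold are all illustrative assumptions.

```python
# Minimal sketch (illustrative only) of predictive-failure alerting for a
# UPS: flag battery-temperature readings that deviate sharply from the
# recent rolling baseline, using a simple z-score test.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=5, z_threshold=3.0):
    """Return indices of readings more than z_threshold standard
    deviations away from the trailing window's mean."""
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                alerts.append(i)
        history.append(value)
    return alerts

# Illustrative battery temperatures in deg C; the jump at the end is the
# sort of deviation that would raise a preventative-maintenance alert.
temps = [25.1, 25.0, 25.2, 24.9, 25.1, 25.0, 25.2, 31.5]
print(detect_anomalies(temps))  # -> [7]
```

Production systems would learn from far richer telemetry (voltage, impedance, charge cycles), but the principle is the same: act on deviations from learned normal behavior before they become downtime.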
Lithium-ion batteries continue to see growing adoption in data centers. As their prices decrease and concerns over safety issues ease, the argument for continuing to use VRLA (valve regulated lead acid) batteries becomes harder to make. As lithium-ion batteries spread in data center applications, their chemistry opens new possibilities: more frequent charge cycles, healthy operation below full charge, no coup de fouet effect, and no need for a controlled battery room environment.
The combination of lithium-ion battery benefits and ML capabilities has led CUI Inc and VPS (Virtual Power Systems) to integrate the two into a new energy storage solution. “Whether a data center is at the edge, a colocation, or large hyperscale, they all have power infrastructure optimization issues,” explains Mark Adams, Senior Vice President of CUI Inc. “With Software Defined Power, CUI Inc and Virtual Power Systems provide a dynamically intelligent solution to capture currently overprovisioned infrastructures, and gain greater capitalization of the footprint. Machine learning algorithms allow for the solution to analyze, predict, and react to the changes through a combination of hardware energy storage and software.”
Distributed IT architectures: Edge and Hybrid data center deployments
The next generation of data center deployments is taking a more distributed approach, often utilizing a combination of cloud and colocation services, on-premises compute, and edge data center deployments. This will commonly require several “lights-out” data centers that must be managed and operated remotely. ML will play a critical role in the automation and management of these data centers. Specifically, ML will enable “lights-out” data centers to run efficiently, with predictive failure and preventative maintenance alerts to reduce downtime, while also reducing the resources required to manage such a footprint.
Artificial intelligence and machine learning are touching many aspects of the data center. They are bringing efficiency gains, increased reliability, and automation to data center physical infrastructure. On top of that, they will allow for significantly improved remote management of distributed data center footprints. However, while AI and ML will revolutionize how a data center is operated, and could even enable fully autonomous operation, a human presence will remain critical in data center operations over the next decade.
The human abilities to hear, smell, and intuit when something is wrong have yet to be matched by current AI and ML capabilities. So while it may be time to start bringing ML-based infrastructure into data centers, it is not time to start shipping the humans out.
For more information, please contact:
Senior Research Analyst, Cloud and Data Centers
+1 512 813 6290