Presented by the IEEE Santa Clara Valley Chapter, Power & Energy Society / Industry Applications Society
October 20, 8:00 am – 4:30 pm, at the Delta Hotel, Santa Clara (breakfast available from 7:30 am)
2151 Laurelwood Rd., Santa Clara, CA 95054
Admission: IEEE members: $225 ($200 Early Bird by 10/6)
Non-IEEE members: $250 ($225 Early Bird by 10/6)

Registration Link

https://events.vtools.ieee.org/event/register/372108

Learn from Nvidia, NASA, Vertiv, Cloudflare, Volt Server, and EdgeCloudLink why the deployment of AI and high-performance computing is revolutionizing data center design, and how these and other industry leaders are solving the resulting challenges.

Session 1 Keynote: Achieving a Sustainable Future for Data Center Power
Attendees will learn why today's IT infrastructure and data center power designs and architectures will not meet tomorrow's rapidly changing power requirements. This session will address how the pandemic and aggressive ESG (Environmental, Social, and Governance) goals have accelerated the need for substantial change to current and future data center IT power architecture, design, and delivery. The presenter will explain why organizations will be challenged to develop an approach that responds to increasing government regulation, corporate sustainability goals, rising demand, and reduced availability of traditional power. Many data center designers will be asked to address these challenges while reducing cost and increasing speed without sacrificing reliability.
The fast-paced AI revolution will further complicate data center power requirements and demand many new approaches. For example, we have found that increased use of liquid cooling for high-end computing equipment can improve power usage efficiency.
Speaker: Peter Panfil, Vice President Power Solutions, Vertiv

Session 2:   Environmentally Friendly Supercomputing at NASA
At NASA, high-end computing is an essential tool supporting science and engineering. The High-End Computing Capability (HECC) component of the High-End Computing portfolio provides the systems and services that facilitate discovery across all of NASA's mission directorates. HECC supports a broad base of users exploring challenges ranging from the design of fuel-efficient aircraft to providing decision makers with critical information on how policy changes could affect our global ecosystem. The compute resources are significant, sometimes requiring more than 50 kW per rack. This talk will discuss HECC's efforts to deploy systems that require substantial increases in power per rack to meet NASA's requirements, while minimizing the impact on the environment. NASA is working with computer vendors and module manufacturers, exploring more efficient chip architectures, to continue improving facility efficiencies.
Speaker: William Thigpen, Director, High End Computing Center, NASA Ames

Session 3:  Advanced Data Center Cooling with Air-Liquid Hybrid Technology 
Higher-density, accelerated-compute, and AI infrastructure for data centers is pushing the limits of power capacity and cooling technologies. These challenges start at the chip and work their way up to the grid. AI chips will need more advanced cooling technologies than those used today to reach peak performance. As the use of AI grows, data centers could consume a significant portion of global electricity demand, so improved efficiency is essential to relieve pressure on the grid.
In this presentation we discuss the energy optimization opportunities of air-liquid hybrid cooling as compared to pure air cooling for data centers. A gradual transition from 100% air cooling to 25%–75% air and liquid cooling has been studied to understand the changes in IT, fan, facility, and total data center power consumption. Various system design optimizations, such as supply air temperature (SAT), facility chilled water temperature, economization, and secondary fluid temperature, are considered to highlight the importance of proper setpoint conditions on both primary and secondary sides. Computational fluid dynamics (CFD) and flow network modeling (FNM) are discussed, which assess the performance of air and liquid cooling by evaluating the required flow rate, pressure drop, critical case temperature of computing components, and temperature change of the cooling medium. Power usage effectiveness (PUE) will be compared with Total Usage Effectiveness (TUE), which appears to be a more suitable metric for weighing a data center's design efficiency. For the most optimized case, we can achieve up to 27% lower facility power consumption. Therefore, increasing the share of liquid cooling significantly reduces power requirements, one of the most critical elements of a sustainable design.
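For context on the metric comparison above, here is a minimal sketch of the standard PUE and TUE definitions (TUE was introduced as PUE multiplied by ITUE, the IT-internal usage effectiveness); the function names and example power figures are illustrative assumptions, not numbers from the talk:

```python
# Standard data center efficiency metrics (illustrative sketch):
#   PUE  = total facility power / IT equipment power
#   ITUE = IT equipment power / compute power actually reaching the
#          processors, memory, etc. (captures IT-internal losses such
#          as server fans, power supplies, and voltage regulators)
#   TUE  = ITUE * PUE = total facility power / compute power

def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power usage effectiveness of the facility."""
    return total_facility_kw / it_kw

def itue(it_kw: float, compute_kw: float) -> float:
    """IT-internal usage effectiveness."""
    return it_kw / compute_kw

def tue(total_facility_kw: float, it_kw: float, compute_kw: float) -> float:
    """Total usage effectiveness: facility and IT-internal losses combined."""
    return pue(total_facility_kw, it_kw) * itue(it_kw, compute_kw)

# Hypothetical numbers: a 1500 kW facility with a 1200 kW IT load,
# of which 1000 kW reaches the computing components.
print(round(pue(1500, 1200), 3))        # 1.25
print(round(tue(1500, 1200, 1000), 3))  # 1.5
```

TUE is attractive here because liquid cooling often removes server fan power, an IT-internal loss that PUE alone cannot see: moving fan power out of the IT load can make PUE look worse even as total energy use falls, while TUE reflects the true improvement.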
Speaker: Ali Heydari, Distinguished Engineer, NVIDIA
Speaker: Fred Rebarber, President, Norman S Wright

Session 4:   Upgrading Existing Data Centers to Support AI
Semiconductor power consumption has been rapidly increasing over the past 10 years. Whether through accelerated solutions, larger storage densities, or simply faster CPUs, we are seeing significant increases in power per device. But with this increase in device power, we have not seen commensurate increases in facility power, and the rise of AI is only exacerbating the problem. Some solutions we will discuss include changing form factors at the node level (2U instead of 1U), sharing ToRs across multiple racks, and more, but ultimately facilities teams will need to grow the power footprint of data center campuses. This power increase is happening while companies are also seeking to reduce emissions, and AI makes this even more complicated. AI is not a singular workload or even a single phase; for example, training and inference have dramatically different computational footprints, latency, and power requirements. This implies that choosing separate locations for training and inference data centers will become more significant, and that other factors such as dark fiber colocation, overall data storage strategy, and green power availability will be increasingly important for companies selecting facility sites.
Speaker: Rebecca Weekly, VP, Hardware Systems Engineering, Cloudflare

Session 5:   Digital Electricity:  Fault Managed Power  
The exponential growth of power and data demands driven by AI is putting tremendous pressure on operators to provide increased capacity economically, reliably, and safely. Fault Managed Power (FMP) is a new, rapidly installed, high-density power distribution technology adopted by the National Electrical Code in 2023 and supported by two new UL safety standards. FMP has the power capability of industrial AC but, for the first time in history, even at hundreds of volts, is not harmful when touched, and is fire and arc safe even when not contained in conduit or busway. FMP conductors are uniquely qualified to operate in the same cable as data, allowing the elimination of the entire overhead power distribution layer in the whitespace in exchange for a common, high-density data/power tray that can be installed and managed using the same IT skill sets currently used for Ethernet cabling. Finally, FMP energy "packets" are data enabled, providing high-resolution monitoring and fault diagnostics of power flow at millisecond time frames. In summary, FMP offers the promise of a modernized electricity format to support the demands of AI. FMP systems have been installed in over 1,000 large venues over the last 9 years, including stadiums, airports, smart hotels, and indoor vertical farms. Data centers are a new application area that can be addressed now that the technology has had years of field exposure and is supported by a dedicated National Electrical Code article.
Speaker: Stephen Eaves, CEO, Volt Server

Session 6:   Fully Sustainable Data Centers for AI Workloads

Data centers require a large amount of power in urban locations where the utility grid is likely to be severely constrained. At the same time, we need more sustainable, highly reliable sources of electricity for these mission-critical facilities. EdgeCloudLink is launching a novel data-center-as-a-service offering that employs a high-density, zero-emission, highly efficient, off-grid power architecture and can be quickly deployed to meet the growing demand for data centers supporting AI applications. This presentation will cover an overview of ECL's design approach and techno-economic considerations.

Speaker: Rajesh Gopinath, Co-Founder, EdgeCloudLink

We look forward to having you join us October 20!

Regards,

Steve Jordan,
Chair, IEEE SCV PES/IAS

Event page with registration link: https://r6.ieee.org/scv-pesias/2023/09/15/october-20-powering-the-ai-data-center-revolution/


Lunch and networking are scheduled as part of this clean energy event.
Venue: Delta Hotel, Santa Clara