Supermicro Founder and CEO Charles Liang Will Be Joined by Jensen Huang, NVIDIA CEO, and Other Industry Luminaries to Outline Developments to Accelerate Cloud, AI, Edge, and Storage Workloads, Investments to Drive Rack Scale Manufacturing, and Innovations to Reduce the Environmental Impact of Today’s Data Centers with Green Computing Technologies
Super Micro Computer, Inc. (Nasdaq: SMCI), a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, continues to offer IT solutions that reduce the environmental impact of today’s data centers. Supermicro is advancing technology in critical areas such as product design, green computing, manufacturing, and rack scale integration, enabling organizations to become productive quickly while reducing energy consumption.
“Our Green Computing focus enables Supermicro to design and manufacture state-of-the-art servers and storage systems with the latest CPU and GPU technologies from NVIDIA, Intel, and AMD that reduce power consumption,” said Charles Liang, president and CEO of Supermicro. “Our innovative rack scale liquid cooling option enables organizations to reduce data center power usage expenses by up to 40%. Our popular GPU servers featuring the NVIDIA HGX H100 8-GPU continue to be in demand for AI workloads. We are expanding our solution offerings with innovative servers that use the NVIDIA Grace CPU Superchip and are working closely with NVIDIA to bring energy-efficient servers to market for AI and other industries. Worldwide, our manufacturing capacity is 4,000 racks today and will exceed 5,000 later this year.”
Supermicro has the most comprehensive portfolio to support AI workloads and other verticals. These innovative systems include single- and dual-socket rack mount systems based on 4th Gen Intel Xeon Scalable processors and 4th Gen AMD EPYC processors in 1U, 2U, 4U, 5U, and 8U form factors supporting 1 to 10 GPUs, as well as the density-optimized SuperBlade® systems supporting 20 NVIDIA H100 GPUs in an 8U enclosure, and SuperEdge systems designed for IoT and edge environments. The newly announced E3.S Petascale storage systems offer significant performance, capacity, throughput, and endurance when training on very large AI datasets while maintaining excellent power efficiency.
A new product family built on the NVIDIA Grace CPU Superchip will be available soon. Each of these new servers will contain 144 cores, with dual CPUs joined by a 900GB/sec connection, allowing for highly responsive AI applications and those requiring extremely low-latency responses. With the CPU running at a 500W TDP, this system will reduce energy consumption for cloud-native workloads and the next generation of AI applications.
For more information, please visit: https://www.supermicro.com/en/products/system/GPU/2U/ARS-221GL-NR
With AI applications proliferating, demand for high-end servers designed for AI is increasing, which brings new challenges for system providers incorporating the latest CPUs and GPUs. The most advanced Supermicro GPU server incorporates dual CPUs and up to eight NVIDIA HGX H100 GPUs and is available with a liquid-cooled option, reducing OPEX.
“NVIDIA is closely working with Supermicro to quickly bring innovations to new server designs to meet the needs of the most demanding customers,” said Ian Buck, vice president of hyperscale and HPC at NVIDIA. “With Supermicro’s servers powered by Grace CPU Superchips shipping shortly and H100 GPUs gaining traction around the world, we’re working together to bring AI to a wide range of markets and applications.”
To reduce the TCO for customers, Supermicro is endorsing the new NVIDIA MGX reference architecture that will result in over a hundred server configurations for a range of AI, HPC, and Omniverse applications. This modular reference architecture includes CPUs, GPUs, and DPUs and is designed for multiple generations of processors.
Supermicro will also incorporate the latest NVIDIA networking technology, the NVIDIA Spectrum™-X networking platform, in a broad range of solutions. The platform is the first designed specifically to improve the performance and efficiency of Ethernet-based AI clouds. Spectrum-X is built on network innovations powered by the tight coupling of the NVIDIA Spectrum-4 Ethernet switch with the NVIDIA BlueField®-3 data processing unit (DPU). This breakthrough technology achieves 1.7X better overall AI performance and energy efficiency, along with consistent, predictable performance in multi-tenant environments.
Green computing is critical for today’s data centers, which consume 1–1.5% of worldwide electricity demand. Supermicro’s complete rack scale liquid cooling solution significantly reduces the need for traditional cooling methods. With redundant, hot-swappable power supplies and pumps, entire racks of high-performing AI- and HPC-optimized servers can be cooled efficiently even during a power supply or pump failure. The solution also uses custom-designed cold plates for both CPUs and GPUs, which remove heat more efficiently than traditional designs. Up to $10B in energy costs can be saved, and the need to build 30 fossil fuel power plants avoided, if data centers lower their PUE closer to 1.0 with Supermicro technology.
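To illustrate how a lower PUE translates into energy savings, the following Python sketch works through the basic arithmetic. All inputs (IT load, electricity price, and the baseline and improved PUE values) are hypothetical assumptions for illustration only, not Supermicro figures.

```python
# Illustrative only: estimates annual energy-cost savings from a PUE improvement.
# PUE = total facility power / IT equipment power, so total power = IT load * PUE.
# All inputs are hypothetical assumptions, not figures from Supermicro.

def annual_energy_cost(it_load_kw: float, pue: float, price_per_kwh: float) -> float:
    """Total facility energy cost per year for a given IT load and PUE."""
    hours_per_year = 8760
    total_kw = it_load_kw * pue
    return total_kw * hours_per_year * price_per_kwh


if __name__ == "__main__":
    it_load_kw = 2_000       # assumed IT load of a mid-size data center (2 MW)
    price_per_kwh = 0.10     # assumed electricity price in $/kWh

    cost_baseline = annual_energy_cost(it_load_kw, pue=1.6, price_per_kwh=price_per_kwh)
    cost_improved = annual_energy_cost(it_load_kw, pue=1.1, price_per_kwh=price_per_kwh)

    savings = cost_baseline - cost_improved
    print(f"Baseline (PUE 1.6): ${cost_baseline:,.0f}/year")
    print(f"Improved (PUE 1.1): ${cost_improved:,.0f}/year")
    print(f"Estimated savings:  ${savings:,.0f}/year ({savings / cost_baseline:.0%})")
```

With these assumed inputs the savings work out to roughly 30% of the annual energy bill; actual results depend on the baseline PUE, IT load, and local electricity prices.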
To learn more about Supermicro Liquid Cooling Solutions, please visit: www.supermicro.com/liquidcooling
The Supermicro Liquid Cooling Solution includes:
- CDU – the Cooling Distribution Unit, which circulates the liquid throughout the entire rack of servers.
- CDM – the Cooling Distribution Manifold, which delivers cool liquid to each server and provides the return path.
- Cold Plates – attach directly to the CPUs or GPUs and are custom designed.
- Hoses/Connectors – leakproof connectors and hoses that carry the liquid between each server and the CDM.
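To make the roles of these components concrete, the short Python sketch below applies the textbook heat-transfer relation Q = ṁ·c_p·ΔT to estimate how much coolant flow a cold-plate loop needs for a given server heat load. The per-server heat load, temperature rise, and servers-per-rack figures are illustrative assumptions, not Supermicro specifications.

```python
# Minimal sizing sketch for a cold-plate liquid cooling loop, using Q = m_dot * c_p * dT.
# All numbers are illustrative assumptions, not Supermicro specifications.

WATER_CP_J_PER_KG_K = 4186.0     # specific heat of water, J/(kg*K)
WATER_DENSITY_KG_PER_L = 0.997   # kg per liter at roughly 25 C

def required_flow_lpm(heat_load_w: float, delta_t_k: float) -> float:
    """Coolant flow (liters/minute) needed to absorb heat_load_w with a delta_t_k temperature rise."""
    mass_flow_kg_s = heat_load_w / (WATER_CP_J_PER_KG_K * delta_t_k)
    return mass_flow_kg_s / WATER_DENSITY_KG_PER_L * 60.0

if __name__ == "__main__":
    # Assumed per-server heat load captured by cold plates: 2 CPUs at 350 W and 8 GPUs at 700 W.
    server_heat_w = 2 * 350 + 8 * 700
    per_server_lpm = required_flow_lpm(server_heat_w, delta_t_k=10.0)
    rack_lpm = per_server_lpm * 8   # assumed 8 such servers per rack, fed through the CDM

    print(f"Per-server coolant flow: {per_server_lpm:.1f} L/min")
    print(f"Rack-level flow the CDU must circulate: {rack_lpm:.1f} L/min")
```

At these assumed loads each server needs roughly 9 L/min of coolant, which is why the CDU and CDM are sized for rack-level flow rather than for a single server.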
Supermicro has qualified a number of servers from various product families with this state-of-the-art cooling solution. The server list includes the following:
- BigTwin®: 2U2N, 2U4N
- SuperBlade
- Hyper: 1U, 2U
- GPU Servers (PCIe and SXM)
- GrandTwin™: 4U8N, 4U4N
Rack scale integration is another core competency that data center operators are demanding. Faster time to productivity requires entire racks to be delivered to data centers ready to go. Supermicro can deliver thoroughly tested L11 and L12 clusters, including customer applications, configured for large-scale liquid cooling when required.