
NTT & Red Hat Fuel AI Analysis at the Edge with IOWN Technologies

Joint solution enables real-time AI analysis of massive data sets while reducing power consumption and latency

As part of the Innovative Optical and Wireless Network (IOWN) initiative, NTT Corporation (NTT) and Red Hat, Inc., in collaboration with NVIDIA and Fujitsu, have jointly developed a solution to enhance and extend the potential for real-time artificial intelligence (AI) data analysis at the edge. Using technologies developed by the IOWN Global Forum and built on the foundation of Red Hat OpenShift, the industry’s leading hybrid cloud application platform powered by Kubernetes, the solution has received IOWN Global Forum Proof of Concept (PoC)[1][2] recognition for its real-world viability and use cases.

As innovation in AI, sensing technology and networking continues to accelerate, using AI analysis to assess and triage input at the network’s edge will be critical, especially as data sources expand almost daily. Using AI analysis at large scale, however, can be slow and complex, and can be associated with higher maintenance costs and software upkeep when onboarding new AI models and additional hardware. With edge computing capabilities emerging in more remote locations, AI analysis can be placed closer to the sensors, reducing latency and increasing bandwidth.

This solution consists of the IOWN All-Photonics Network (APN) and data pipeline acceleration technologies in the IOWN Data-Centric Infrastructure (DCI). NTT’s accelerated data pipeline for AI adopts Remote Direct Memory Access (RDMA) over the APN to efficiently collect and process large amounts of sensor data at the edge. Container orchestration technology from Red Hat OpenShift[3] provides greater flexibility to operate workloads within the accelerated data pipeline across geographically distributed and remote data centers. NTT and Red Hat have successfully demonstrated that this solution can effectively reduce power consumption while maintaining low latency for real-time AI analysis at the edge.
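
The data path can be illustrated with a short sketch. The snippet below is illustrative only and is not NTT’s implementation: it assumes the open-source UCX-Py (ucp) and CuPy Python libraries, with a hypothetical port and frame size, to show the general pattern of receiving sensor data over an RDMA-capable transport directly into accelerator (GPU) memory so that the CPU never handles the payload.

    # Illustrative sketch only: receive sensor frames over an RDMA-capable
    # transport (UCX) directly into GPU memory so that GPU-side processing
    # can begin without a CPU-side copy. Port and frame size are hypothetical.
    import asyncio

    import cupy as cp   # GPU array library
    import ucp          # UCX-Py bindings for Unified Communication X (UCX)

    FRAME_BYTES = 1920 * 1080 * 3   # hypothetical raw frame size
    PORT = 13337                    # hypothetical listener port

    async def handle_sensor(ep):
        # Allocate the receive buffer in device memory; UCX writes into it
        # directly, using RDMA where the underlying transport supports it.
        gpu_frame = cp.empty(FRAME_BYTES, dtype=cp.uint8)
        await ep.recv(gpu_frame)
        # ...GPU-side decode, preprocessing and inference would run here...
        await ep.close()

    async def main():
        listener = ucp.create_listener(handle_sensor, port=PORT)
        while not listener.closed():
            await asyncio.sleep(1)   # keep accepting sensor connections

    if __name__ == "__main__":
        asyncio.run(main())

In the PoC itself such transfers run over the APN, and keeping the pipeline resident in the accelerator is what reduces CPU-control overhead and improves power efficiency.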

The proof of concept evaluated a real-time AI analysis platform[4] with Yokosuka City as the sensor installation base and Musashino City as the remote data center, both connected via the APN. Even when a large number of cameras was accommodated, the latency required to aggregate sensor data for AI analysis was reduced by 60% compared with conventional AI inference workloads. The IOWN PoC testing also demonstrated that the power consumption required for AI analysis of each camera at the edge could be reduced by 40% compared with conventional technology. This real-time AI analysis platform allows the GPU to be scaled up to accommodate a larger number of cameras without the CPU becoming a bottleneck; according to a trial calculation assuming 1,000 cameras are accommodated, power consumption is expected to be reduced by a further 60%. The highlights of the proof of concept for this solution are as follows:

  • Accelerated data pipeline for AI inference, provided by NTT, uses RDMA over the APN to fetch large-scale sensor data from local sites directly into accelerator memory in a remote data center, reducing the protocol-handling overhead of conventional networking. Data processing for AI inference is then completed within the accelerator with less CPU-control overhead, improving the power efficiency of AI inference.
  • Large-scale AI data analysis in real time, powered by Red Hat OpenShift, uses Kubernetes operators[5] to minimize the complexity of implementing hardware-based accelerators (GPUs, DPUs, etc.), enabling improved flexibility and easier deployment across disaggregated sites, including remote data centers (a generic scheduling sketch follows this list).
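
On the orchestration side, Kubernetes operators expose hardware accelerators to the cluster as schedulable resources, so an inference workload can request a GPU declaratively and be placed wherever capacity exists. The generic sketch below uses the upstream Kubernetes Python client with a hypothetical namespace, pod name and container image; it illustrates the pattern rather than the PoC’s actual deployment.

    # Illustrative sketch only: launch an AI-inference pod on an
    # OpenShift/Kubernetes cluster where a GPU operator exposes accelerators
    # as the "nvidia.com/gpu" resource. Image, namespace and pod name are
    # hypothetical placeholders.
    from kubernetes import client, config

    def launch_inference_pod():
        config.load_kube_config()   # reuse the local kubeconfig / oc login context
        pod = client.V1Pod(
            metadata=client.V1ObjectMeta(name="edge-ai-inference"),
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="inference",
                        image="registry.example.com/edge/ai-inference:latest",  # placeholder
                        resources=client.V1ResourceRequirements(
                            limits={"nvidia.com/gpu": "1"}   # request one accelerator
                        ),
                    )
                ],
            ),
        )
        client.CoreV1Api().create_namespaced_pod(namespace="edge-ai", body=pod)

    if __name__ == "__main__":
        launch_inference_pod()

Because the accelerator is requested declaratively, the same workload definition can be scheduled across disaggregated sites, including remote data centers, without hardware-specific changes.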

This solution helps set the stage for intelligent AI-enabled technologies that will help businesses sustainably scale. With this solution, organizations can benefit from:

  • Reduced overhead associated with collecting large amounts of data;
  • Enhanced data collection that can be shared between metropolitan areas and remote data centers for quicker AI analysis;
  • The ability to utilize locally available and potentially renewable energy, such as solar or wind;
  • Increased area management security with video cameras acting as sensor devices.

Learn more about this solution at the IOWN Global Forum session at MWC Barcelona scheduled for February 29, 2024.

Supporting Quotes

Chris Wright, chief technology officer and senior vice president of Global Engineering at Red Hat and board director of IOWN Global Forum
“Over the last few years, we’ve worked as part of IOWN Global Forum to set the stage for AI innovation powered by open source and deliver technologies that help us make smarter choices for the future. This is important and exciting work, and these results help prove that we can build AI-enabled solutions that are sustainable and innovative for businesses across the globe. With Red Hat OpenShift, we can help NTT provide large-scale AI data analysis in real time and without limitations.”

Katsuhiko Kawazoe, senior executive vice president of NTT and chairman of IOWN Global Forum
“The NTT Group, in great collaboration with partners, is accelerating the development of IOWN to achieve a sustainable society. This IOWN PoC is an important step forward toward green computing for AI, which supports collective intelligence of AI. We are further improving IOWN’s power efficiency by applying Photonics-Electronics Convergence technologies to a computing infrastructure. We aim to embody the sustainable future of net zero emissions with IOWN.”

Kenichi Sakai, senior vice president of Fujitsu Limited, Infrastructure System Business Unit
“We have been contributing to the realization of sustainable and smarter society by applying our server technologies including PRIMERGY CDI (Composable Disaggregated Infrastructure) which enables disaggregated computing. These PoC results show that IOWN’s feasibility has increased towards the commercialization in 2026 and that IOWN has a potential for AI applications. Fujitsu enables higher performance and power efficiency with the composability of PRIMERGY CDI and continues to contribute to the realization of IOWN computing infrastructure.”

Ronnie Vasishta, senior vice president of telecom, NVIDIA
“The demand for AI inferencing is growing, and telco edge has a pivotal role to play. NVIDIA has been collaborating with NTT and IOWN to combine the APN network with an accelerated data processing pipeline and AI, showcasing computer-vision and image-processing technology that’s both low latency and power efficient.”

[1] Innovative Optical and Wireless Network Global Forum: https://iowngf.org/
[2] PoC Reference: Reference Implementation Model for the Area Management Security Use Case, August 2022. https://iowngf.org/wp-content/uploads/formidable/21/IOWN-GF-RD-RIM_for_AM-S_UC_PoC_Reference_1.0.pdf
[3] This demonstration uses Red Hat OpenShift 4.13 for container orchestration.
[4] This PoC uses Fujitsu PRIMERGY RX2540 M7 with NVIDIA A100 Tensor Core GPUs and NVIDIA ConnectX-6 NICs for AI inference at the edge/suburban data centers. This PoC also utilizes NVIDIA libraries for data pipeline acceleration, such as NVIDIA Rivermax, nvJPEG, CV-CUDA, and the Unified Communication X (UCX) framework.
[5] https://www.redhat.com/en/topics/containers/what-is-a-kubernetes-operator
