AMD will provide expertise in diverse compute engine capabilities and open software to help drive leadership performance for accelerated workloads
AMD (NASDAQ: AMD) today announced it is joining the newly created PyTorch Foundation as a founding member. The foundation, which will be part of the non-profit Linux Foundation, will drive adoption of Artificial Intelligence (AI) tooling by fostering and sustaining an ecosystem of open-source projects with PyTorch, the Machine Learning (ML) software framework originally created and developed by Meta.
As a founding member, AMD joins others in the industry to prioritize the continued growth of PyTorch’s vibrant community. Building on the AMD ROCm™ open software platform, AMD Instinct™ accelerators, adaptive SoCs and CPUs, AMD will help the PyTorch Foundation democratize state-of-the-art tools, libraries and other components, making these ML innovations accessible to everyone.
“Open software is critical to advancing HPC, AI and ML research, and we’re ready to bring our experience with open software platforms and innovation to the PyTorch Foundation,” said Brad McCredie, corporate vice president, Data Center and Accelerated Processing, AMD. “AMD Instinct accelerators and ROCm software power important HPC and ML sites around the world, from exascale supercomputers at research labs to major cloud deployments showcasing the convergence of HPC and AI/ML. Together with other foundation members, we will support the acceleration of science and research that can make a dramatic impact on the world.”
“We are excited to have AMD join the PyTorch Foundation and bring its extensive expertise in HPC, AI and ML to our members,” said Santosh Janardhan, VP, Infrastructure at Meta. “AMD has continued to support PyTorch with its integration into the ROCm open software platform and has worked extensively with the open-source community and other foundation members to advance the performance of ML and AI workloads. The collaborative support offered by AMD continues our engagement across broad industry initiatives for global impact.”
AMD, Advancing AI and ML
AMD is uniquely positioned with its broad product and software portfolio to help customers and partners develop and deploy applications with multiple forms of AI, from the cloud and enterprise to the edge and endpoints. With a diverse set of hardware including AMD Instinct and Alveo accelerators, adaptive SoCs and CPUs, AMD can support a wide variety of pervasive AI and ML models, from small edge endpoints to large-scale training and inference workloads.
AMD also works extensively with the open-source AI community to promote and extend machine and deep learning capabilities and optimizations. Vitis AI provides a comprehensive AI inference development platform for AMD adaptive SoCs and Alveo data center accelerators. Vitis AI plugs into common software developer tools and draws on a rich set of optimized, open-source libraries, allowing developers to add machine learning acceleration directly to their applications.
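For illustration, the sketch below shows the general shape of an inference call through the VART Python bindings that Vitis AI exposes; the model file name and tensor data types are placeholders, and exact method signatures may vary between Vitis AI releases.

```python
import numpy as np
import vart   # Vitis AI Runtime (VART) Python bindings
import xir    # XIR graph API used to load compiled models

# Load a compiled model (the .xmodel path is a placeholder) and select the
# subgraph that was compiled for the DPU accelerator.
graph = xir.Graph.deserialize("resnet50.xmodel")
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = next(s for s in subgraphs
                    if s.has_attr("device") and s.get_attr("device").upper() == "DPU")

# Create a runner and allocate input/output buffers matching the model's tensors.
runner = vart.Runner.create_runner(dpu_subgraph, "run")
in_dims = tuple(runner.get_input_tensors()[0].dims)
out_dims = tuple(runner.get_output_tensors()[0].dims)
input_data = [np.zeros(in_dims, dtype=np.int8)]    # real code would hold preprocessed input
output_data = [np.zeros(out_dims, dtype=np.int8)]

# Submit the job asynchronously and wait for the accelerator to finish.
job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)
```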
The ROCm™ open software platform is constantly evolving to meet the needs of the AI/ML and HPC community. With the latest release of ROCm 5.0, developers have access to turn-key AI framework containers on AMD Infinity Hub, advanced tools and a streamlined installation, and can expect reduced kernel launch times and faster application performance. In addition, with the latest PyTorch 1.12 release, AMD ROCm™ support has moved from beta to stable. You can read more about that here.
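As a minimal sketch of how that stable support surfaces to developers, the example below checks for a ROCm build of PyTorch and runs a tensor operation on an AMD GPU; under ROCm, PyTorch addresses AMD accelerators through the familiar torch.cuda device interface.

```python
import torch

# On a ROCm build of PyTorch, torch.version.hip reports the HIP/ROCm version
# (it is None on CUDA-only or CPU-only builds), and AMD GPUs are reached
# through the standard "cuda" device interface.
print("HIP version:", torch.version.hip)
print("AMD GPU available:", torch.cuda.is_available())

if torch.cuda.is_available():
    device = torch.device("cuda")          # maps to the AMD accelerator under ROCm
    x = torch.randn(2048, 2048, device=device)
    y = x @ x.T                            # matrix multiply executed on the GPU
    print("Result mean:", y.mean().item())
```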