OmniML, in collaboration with Intel, has delivered hardware-efficient AI on the latest 4th Gen Intel Xeon processor family, drastically accelerating language model performance.
OmniML, an enterprise artificial intelligence (AI) software company, today announced a new strategic partnership with Intel to accelerate the development and deployment of AI applications for enterprises of all sizes. The two companies will collaborate on community and customer growth opportunities via the Intel Disruptor Initiative to provide greater access to OmniML’s pioneering software platform, Omnimizer®.
OmniML’s software platform: Unlocking the true potential of AI on Intel hardware
AI has become ingrained in many people's lives, from helping us drive more safely to automating mundane tasks and providing better security. However, getting responsible, accurate, and efficient applications to work in production is still a major challenge for most organizations. One of the main reasons is the widening gap between machine learning (ML) model training and ML model inferencing, which makes it difficult to design models that fully utilize the available resources on inference hardware.
To get all the components running smoothly, the ML model design and the underlying hardware need to work in sync to deliver superior performance. OmniML and Intel have teamed up to bridge the gap between model training and inferencing by incorporating hardware-efficient AI development from the outset.
To kick off this collaboration, OmniML demonstrated superior performance for one of the most popular language models on Intel platforms. Using its Omnimizer platform on 4th Gen Intel Xeon Scalable processors with integrated acceleration via Intel Advanced Matrix Extensions (Intel AMX) technology, OmniML achieved an over 10x speedup in words processed per second compared with a baseline multilingual DistilBERT model.
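For context on the metric, words-per-second throughput and the resulting speedup can be measured as shown in the minimal sketch below. The helper name `words_per_second` and the stand-in inference functions (with hypothetical latencies) are illustrative assumptions, not OmniML's or Intel's actual benchmark code.

```python
import time

def words_per_second(infer_fn, sentences, n_runs=5):
    """Throughput as total words processed per second.

    infer_fn is a callable that runs inference on a batch of
    sentences (a stand-in for a real model's forward pass).
    """
    total_words = sum(len(s.split()) for s in sentences) * n_runs
    start = time.perf_counter()
    for _ in range(n_runs):
        infer_fn(sentences)
    elapsed = time.perf_counter() - start
    return total_words / elapsed

# Stand-in inference functions with hypothetical per-batch
# latencies, purely to illustrate how a speedup is computed.
def baseline_infer(batch):
    time.sleep(0.010)   # pretend 10 ms per batch

def optimized_infer(batch):
    time.sleep(0.001)   # pretend 1 ms per batch

sentences = ["hardware efficient AI on Xeon"] * 8
speedup = (words_per_second(optimized_infer, sentences)
           / words_per_second(baseline_infer, sentences))
print(f"speedup: {speedup:.1f}x")  # prints the measured throughput ratio
```

In a real benchmark the stand-in functions would be replaced by actual model inference calls on the target hardware, with warm-up runs excluded from timing.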
“Intel is one of the most forward-looking semiconductor companies in the world. OmniML’s strengths lie in our deep understanding of ML model design, optimization, and hardware-aware deployment. By bringing OmniML’s Omnimizer ML platform together with the latest Intel Xeon processor, we have achieved truly amazing performance results, starting with DistilBERT and expanding to larger language models shortly.” – Di Wu, OmniML co-founder and CEO.
“By collaborating with OmniML, we bring together their expertise in ML model design and optimization with Intel’s pioneering processor technology,” said Arijit Bandyopadhyay, CTO – Enterprise Analytics & AI, Head of Strategy – Enterprise & Cloud, Data Platforms Group at Intel Corporation. “Utilizing the AI features built into the new 4th Gen Intel Xeon Scalable processor, OmniML can offer outstanding AI performance to help organizations deliver reliable, leading-edge products. We are excited about this collaboration and how we can help more customers accelerate the adoption of AI technology.”
Unlock Hardware-Efficient AI with Ease
Omnimizer® is an ML platform that facilitates and automates ML model design, training, and deployment. It unifies the ML development and deployment workflows, helping users identify design flaws and performance bottlenecks so they can get models into production faster with superior runtime performance. Omnimizer provides a cloud-native interface for rapidly profiling and visualizing ML model performance on Intel and other hardware, ensuring a model is properly adapted to run efficiently. Omnimizer has demonstrated significant performance boosts across computer vision and natural language processing applications for multinational corporations and fast-growing start-ups.
Natural language processing, and improving language model performance in particular, is among the most important areas of AI application. Everyday devices now incorporate AI-based language models as a core feature of their design to provide human-centric, multilingual interactions. Many of these language models are based on the transformer architecture. Using Omnimizer to increase the efficiency of transformer-based language models opens up a wide range of use cases that weren’t possible before and lowers the total cost of ownership of language models for both on-device AI and cloud inferencing.
The OmniML and Intel collaboration builds on each company’s strengths, creating a winning combination of OmniML’s software-based development platform running on the latest generation of the Intel Xeon processor family.