
OmniML Secures $10 Million to Accelerate AI Computing on Edge Devices

GGV Capital Leads Seed Round, with Additional Investments from Qualcomm Ventures, Foothill Ventures, and Others

OmniML, a startup developing smaller and faster machine learning models, today announced $10 million in seed funding to accelerate the use of artificial intelligence (AI) on edge devices. GGV Capital led the round, with additional investment from Qualcomm Ventures, Foothill Ventures, and other venture capital firms.

Founded by Dr. Song Han, an MIT EECS professor and serial entrepreneur; Dr. Di Wu, a former Facebook engineer; and Dr. Huizi Mao, co-inventor of the “deep compression” technology developed at Stanford University, OmniML solves a fundamental mismatch between AI applications and edge hardware to make AI more accessible for everyone, not just data scientists and developers.

OmniML enables smaller, scalable machine learning (ML) models that let edge devices perform AI inference at levels previously possible only in data centers and cloud environments. OmniML’s approach has already achieved orders-of-magnitude improvements on many major ML tasks on edge devices.

“OmniML’s leading Neural Architecture Search based platform has the potential to disrupt AI model optimization by creating new models that are efficient to begin with, rather than just compressing models,” said Carlos Kokron, Vice President, Qualcomm Technologies Inc. and Managing Director, Qualcomm Ventures Americas. “Their solution offers enterprise customers the ability to build the best AI models for target hardware resulting in significant time and cost savings, as well as improved accuracy. We are excited to invest in OmniML to help make edge AI ubiquitous.”
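For readers unfamiliar with the approach the quote describes, the sketch below illustrates the general idea behind hardware-aware neural architecture search: candidate networks are scored both on a proxy for accuracy and on latency measured where the model will actually run, and the best trade-off is kept. This is a simplified conceptual illustration only, not OmniML’s platform; the candidate space, the parameter-count accuracy proxy, the dummy-input latency measurement, and the scoring formula are all assumptions made for the example.

```python
# Conceptual sketch of hardware-aware neural architecture search (NAS):
# score a few candidate architectures by a proxy for accuracy and by
# latency measured on the deployment device, then keep the best trade-off.
# Illustrative only; the search space, proxy metric, and scoring are assumptions.
import time
import torch
import torch.nn as nn

def build_candidate(width: int, depth: int) -> nn.Module:
    """Build a small convolutional network from searchable hyperparameters."""
    layers, channels = [], 3
    for _ in range(depth):
        layers += [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU()]
        channels = width
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, 10)]
    return nn.Sequential(*layers)

def measure_latency_ms(model: nn.Module, runs: int = 20) -> float:
    """Average forward-pass latency on a dummy input (stand-in for the target device)."""
    model.eval()
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        for _ in range(3):          # warm-up passes before timing
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs * 1000

# Tiny search space; a real NAS system explores far more candidates and
# uses a trained accuracy predictor rather than raw parameter count.
candidates = [(w, d) for w in (16, 32, 64) for d in (2, 4)]
best = None
for width, depth in candidates:
    model = build_candidate(width, depth)
    latency = measure_latency_ms(model)
    params = sum(p.numel() for p in model.parameters())
    accuracy_proxy = params ** 0.5      # assumption: larger model, better accuracy
    score = accuracy_proxy / latency    # reward accuracy, penalize slow models
    if best is None or score > best[0]:
        best = (score, width, depth, latency)

print(f"selected width={best[1]} depth={best[2]} latency={best[3]:.1f} ms")
```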

OmniML’s breakthrough will accelerate the deployment of AI on the edge – particularly computer vision – by alleviating the costly mismatch between demanding AI applications and the constrained hardware they run on. Developers will no longer have to manually optimize ML models for specific chips and devices, a fundamental change that will result in faster deployment of high-performance, hardware-aware AI that can run anywhere.

OmniML is working with customers in sectors such as smart cameras and autonomous driving to create AI-enabled advanced computer vision for improved security and real-time situational awareness. The technology is broadly applicable, though; for instance, it can improve the retail customer experience and support safety and quality-control inspection in precision manufacturing.

“AI is so big today that edge devices aren’t equipped to handle its computational power,” said OmniML Co-Founder and CEO Di Wu, PhD. “That doesn’t have to be the case. Our ML model compression addresses the gap between AI applications and edge devices, increasing the devices’ potential and allowing for hardware-aware AI that is faster, more accurate, cost effective and easy to implement for anyone, on diverse hardware platforms.”
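The compression Wu refers to is OmniML’s own technology, but the general idea of closing the gap between large models and small devices can be seen with two standard, openly available techniques in PyTorch: magnitude pruning and dynamic quantization. The snippet below is a generic illustration of those techniques under toy assumptions, not OmniML’s method.

```python
# Generic illustration of two standard model-compression techniques in PyTorch:
# magnitude pruning and dynamic quantization. This shows the kind of gap
# compression closes on edge hardware; it is not OmniML's proprietary approach.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# 1) Magnitude pruning: zero out the 50% smallest weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")   # make the pruning permanent

# 2) Dynamic quantization: store Linear weights in int8 and quantize
#    activations on the fly, shrinking the model and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)   # compressed model still produces (1, 10) logits
```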

OmniML’s neural architecture search algorithm has been integrated into Amazon’s AutoGluon open-source AutoML library and Meta’s PyTorch open-source deep learning framework, and it has won multiple awards and recognitions, including:
– First place at the Sixth AI Driving Olympics at ICRA’21
– Multiple first-place wins over the past three years in Low-Power Computer Vision Challenge competitions at NeurIPS, ICCV, and CVPR
– First place for 3D semantic segmentation, among many others.
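For context on the AutoGluon integration mentioned above, the quick-start-style snippet below shows AutoGluon’s public AutoML entry point (TabularPredictor) fitting a toy dataset. It illustrates only the library’s automated model-search interface; the data and settings are made up for the example, and it does not demonstrate OmniML’s specific neural architecture search contribution.

```python
# Minimal AutoGluon quick-start: automated model selection on a toy dataset.
# Illustrates the library's AutoML interface only, not OmniML's NAS integration.
import numpy as np
import pandas as pd
from autogluon.tabular import TabularPredictor

# Synthetic binary-classification data: label depends on the sum of two features.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
train = pd.DataFrame({
    "x1": x[:, 0],
    "x2": x[:, 1],
    "label": (x[:, 0] + x[:, 1] > 0).astype(int),
})

# AutoGluon searches over candidate models and hyperparameters automatically.
predictor = TabularPredictor(label="label").fit(train, time_limit=60)
print(predictor.predict(train.drop(columns=["label"])).head())
```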

