Introducing a Python SDK that allows enterprises to effortlessly optimize their ML models for edge devices.
Edge Impulse, the leading edge AI platform, today announced Bring Your Own Model (BYOM), allowing AI teams to leverage their own bespoke ML models and optimize them for any edge device. To empower all ML engineers to quickly add real-time intelligence to their products, Edge Impulse is unveiling its Python SDK, allowing seamless integration of BYOM within existing developer environments. Teams can now access the power of Edge Impulse in minutes, with unprecedented ease.
Edge Impulse is known for its innovative tools that have greatly lowered the barrier to building edge AI solutions for digital health and industrial productivity. With its web-based Studio platform, engineers have been able to collect data, develop and tune ML models, and deploy them to devices. This has empowered teams to quickly create and optimize models and algorithms that run at peak performance on any edge device. With BYOM, users can now import their own trained models into Edge Impulse Studio, instead of having to use datasets to build original models inside the platform.
Coupled with BYOM, the new Python SDK streamlines workflows even further, letting ML teams leverage Edge Impulse directly from their own development environments. The SDK offers powerful and valuable capabilities that can be called with short, intuitive Python one-liners:
- Model profiling for any edge device — Estimates the on-device inference latency and RAM and ROM consumption of any trained model, allowing development teams to determine where and how their applications will run fastest and most efficiently. This helps them find the right balance between model optimizations and hardware capabilities.
- Model optimization and C++ conversion — Tunes models with Edge Impulse's Edge Optimized Neural (EON) Compiler and exports the model as a C++ library, the common format for deploying ML to edge devices.
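As an illustration, a profile-then-deploy workflow with the SDK might look like the sketch below. The package name (`edgeimpulse`), the `ei.model.profile` and `ei.model.deploy` calls, and the device identifier reflect the SDK's documented interface at launch, but treat them as assumptions rather than a definitive reference; both calls run through the Edge Impulse service and require a project API key, so the snippet is wrapped in a function rather than executed directly.

```python
def profile_and_deploy(model_path, api_key, device="cortex-m4f-80mhz"):
    """Profile a trained model for a target device, then export it as an
    EON-optimized C++ library.

    Function and argument names are assumptions based on the Edge Impulse
    Python SDK documentation; a valid project API key is required.
    """
    # Imported inside the function so the sketch reads without the
    # `edgeimpulse` package installed.
    import edgeimpulse as ei

    ei.API_KEY = api_key

    # Estimate on-device inference latency, RAM, and ROM for the target.
    profile = ei.model.profile(model=model_path, device=device)
    print(profile.summary())

    # Optimize with the EON Compiler and export as a C++ library
    # (the SDK's default deployment format).
    ei.model.deploy(
        model=model_path,
        model_output_type=ei.model.output_type.Classification(),
    )
```

Because profiling and deployment both happen server-side, the function only succeeds with network access and an API key from an Edge Impulse project.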
“We’ve always been known for our fantastic user interface, but ML practitioners like us live in Python,” says Daniel Situnayake, Edge Impulse’s head of ML. “We sketch out ideas in notebooks, build data pipelines and training scripts, and integrate with a vibrant ecosystem of Python tools. The Edge Impulse SDK is designed to be one of them. It’s built to work seamlessly with the tools you already know. It feels familiar and obvious, but also magical.”
The BYOM launch also includes the release of the second version of the EON Compiler. This feature minimizes RAM and ROM usage for neural networks, allowing enterprises to do more AI on even the most constrained devices — and thereby build more cost-effective and efficient products. The improved EON Compiler enables models to use over 70% less RAM and 40% less ROM than TensorFlow Lite for Microcontrollers (TFLM). In addition to improving the overall efficiency of models, the EON Compiler also supports a wider range of built-in kernels than TFLM, as well as model types not available in TFLM, such as transformers and classical machine learning models.
The new Edge Impulse tools offer enterprises and their ML development teams an incredible amount of power, and promise to become a valuable part of every ML engineer's toolkit.
Praise
Edge Impulse and its new features are garnering accolades from industry leaders.
“At Weights & Biases, we have an ever-increasing user base of ML practitioners interested in solving problems at the edge. With the new Edge Impulse SDK, our users have a new option to explore different model architectures, track on-device model performance across their experiments, and obtain a library they can deploy and manage on the edge.” —Seann Gardiner, VP, Business Development & International, Weights & Biases
“Hyfe leverages Edge Impulse’s platform to deploy our market-leading AI model for cough detection to the edge, facilitating real-time and efficient monitoring of respiratory health.” —Paul Rieger, Co-Founder, Hyfe