The company will enable automobile manufacturers and suppliers to build in-cabin and ADAS ML models with improved safety outcomes while protecting consumer privacy.
Synthesis AI, a pioneer in synthetic data technologies for computer vision, today announced enhanced capabilities for providing synthetic data across a wide range of automotive and Autonomous Vehicle (AV) use cases, including Driver Monitoring Systems (DMS), Occupant Monitoring Systems (OMS), and Advanced Driver Assistance Systems (ADAS). Through Synthesis Humans and Synthesis Scenarios, an ecosystem of automotive OEMs, providers, and suppliers can now simulate driver and occupant behavior in in-cabin environments to build more capable perception systems. The company has also launched Diverse Human Drivers, an open dataset for training driver monitoring Machine Learning (ML) models that lets computer vision developers experiment with synthetic data.
Prompted by increased demand for improved driver safety, and by the need to meet Euro NCAP and NHTSA regulatory requirements, DMS, OMS, and ADAS are becoming commonplace, as these technologies can dramatically reduce collision rates caused by distracted driving. However, car manufacturers face significant roadblocks due to the lack of real-world data to train, test, and validate these systems. Privacy and safety pose further hurdles, since developers require training data drawn from dangerous scenarios (e.g., drowsy or distracted drivers). Synthetic data will be essential in overcoming these challenges.
Synthesis AI’s enhanced automotive offerings will help car manufacturers, suppliers, and the software companies that serve them achieve public safety, consumer privacy, and regulatory goals. Manufacturers can now use synthetic data to build models for driver state assessment (e.g., drowsiness), activity recognition (e.g., talking on the phone, hands off the wheel), experience personalization, occupant monitoring, driver-pedestrian interaction, and gaze estimation.
The latest product advancements build on the company’s history of helping ML practitioners in the automotive space develop performant models that match or exceed those trained exclusively on real-world data, in a fraction of the time and at a fraction of the cost. In particular, the recent enhancements enable multi-passenger simulation for driver and occupant monitoring, an expanded set of activity models covering key use cases (e.g., falling asleep, talking or texting on the phone, eating and drinking, not wearing a seat belt), broader support for RGB and NIR sensor systems, and the ability to model driver and occupant interactions with the vehicle.
“DMS, OMS, and ADAS require a detailed understanding of human behavior, state, and the in-cabin environment,” said Yashar Behzadi, CEO and Founder of Synthesis AI. “The new synthetic data capabilities have been developed through close work with leading companies in the space over the last three years. The advanced feature set enables companies to create targeted data for DMS and OMS applications, driving safer and more robust systems.”
Synthesis AI works with automobile and AV manufacturers as well as Tier 1 suppliers, continuing a rich history of supporting automotive ML engineering. By combining generative AI with cinematic CGI pipelines, the company’s latest platform enhancements build on a multi-year effort to expand the use of synthetic data in the automotive industry.