Helm.ai Introduces VidGen-2

Helm.ai, a leading provider of advanced AI software for high-end ADAS, autonomous driving, and robotics automation, today announced the launch of VidGen-2, its next-generation generative AI model for producing highly realistic driving video sequences. VidGen-2 doubles the resolution of its predecessor, VidGen-1, improves realism at 30 frames per second, and adds multi-camera support with twice the resolution per camera, giving automakers a scalable, cost-effective solution for autonomous driving development and validation.

Trained on thousands of hours of diverse driving footage using NVIDIA H100 Tensor Core GPUs, VidGen-2 leverages Helm.ai’s innovative generative deep neural network (DNN) architectures and Deep Teaching™, an efficient unsupervised training method. It generates highly realistic video sequences at 696 x 696 resolution, double that of VidGen-1, at frame rates ranging from 5 to 30 fps. It also improves video quality at 640 x 384 resolution and 30 fps, delivering smoother, more detailed simulations. VidGen-2 can generate videos without any input prompt, or from a single image or input video supplied as the prompt.
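As a rough sense of scale (purely illustrative arithmetic based only on the figures quoted above, not on any Helm.ai tooling), the raw pixel throughput implied by these resolutions and frame rates can be tallied as follows:

```python
# Illustrative pixel-throughput arithmetic for the resolutions quoted above.
# Not affiliated with any Helm.ai API; the numbers are from the announcement.

def pixels_per_second(width: int, height: int, fps: int) -> int:
    """Raw pixels generated per second at the given resolution and frame rate."""
    return width * height * fps

# Single-camera output at the new 696 x 696 resolution, at the 30 fps ceiling.
single_cam = pixels_per_second(696, 696, 30)       # 14,532,480 pixels/s

# Three-camera mode at 640 x 384 per camera, 30 fps.
multi_cam = 3 * pixels_per_second(640, 384, 30)    # 22,118,400 pixels/s

print(f"single camera: {single_cam:,} px/s")
print(f"three cameras: {multi_cam:,} px/s")
```

By this measure, the three-camera mode actually emits more raw pixels per second than the single higher-resolution stream, which is one way to read the claimed scalability of the model.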

VidGen-2 also supports multi-camera views, generating footage from three cameras at 640 x 384 (VGA) resolution each. The model ensures self-consistency across all camera perspectives, providing accurate simulation for various sensor configurations.

The model generates driving scene videos across multiple geographies, camera types, and vehicle perspectives. It not only produces highly realistic appearances and temporally consistent object motion, but also learns and reproduces human-like driving behaviors, simulating the motion of the ego vehicle and surrounding agents in accordance with traffic rules. It covers a wide range of scenarios, including highway and urban driving, multiple vehicle types, pedestrians, cyclists, intersections, turns, weather conditions, and lighting variations. In multi-camera mode, scenes are generated consistently across all perspectives.

VidGen-2 gives automakers a significant scalability advantage over traditional non-AI simulators by enabling rapid asset generation and imbuing agents in simulations with sophisticated, real-life behaviors. Helm.ai’s approach not only reduces development time and cost but also closes the “sim-to-real” gap, offering a highly realistic and efficient solution that broadens the scope of simulation-based training and validation.

“The latest enhancements in VidGen-2 are designed to meet the complex needs of automakers developing autonomous driving technologies,” said Vladislav Voroninski, Helm.ai’s CEO and founder. “These advancements enable us to generate highly realistic driving scenarios while ensuring compatibility with a wide variety of automotive sensor stacks. The improvements made in VidGen-2 will also support advancements in our other foundation models, accelerating future developments across autonomous driving and robotics automation.”

Business Wire