Hume AI announces $50M Fundraise and Empathic Voice Interface

Hume AI launches world’s first Empathic Voice Interface, enabling developers to integrate an emotionally intelligent AI voice into applications across health and wellness, AR/VR, customer service call centers, healthcare and more – with a few lines of code.

Hume AI (“Hume” or the “Company”), a startup and research lab building artificial intelligence optimized for human well-being, today announced it has raised a $50M Series B. The round, led by EQT Ventures, will support the debut and continued development of Hume’s new flagship product: an emotionally intelligent voice interface that can be built into any application. Union Square Ventures, Nat Friedman & Daniel Gross, Metaplanet, Northwell Holdings, Comcast Ventures, and LG Technology Ventures also participated in the round.

Hume AI was founded by Dr. Alan Cowen, a former Google researcher and scientist best known for pioneering semantic space theory – a computational approach to understanding emotional experience and expression that has revealed nuances of the voice, face, and gesture now understood to be central to human communication globally. The Company, which operates at the intersection of artificial intelligence, human behavior, and health and well-being, has created an advanced API toolkit for measuring human emotional expression that is already used in industries spanning robotics, customer service, healthcare, health and wellness, user research, and more.

In connection with the fundraise, Hume AI has released a beta version of its flagship product, an Empathic Voice Interface (EVI). The emotionally intelligent conversational AI is the first to be trained on data from millions of human interactions to understand when users are finished speaking, predict their preferences, and generate vocal responses optimized for user satisfaction over time. These capabilities will be available to developers with just a few lines of code and can be built into any application.

AI voice products have the potential to revolutionize our interaction with technology; however, the stilted, mechanical nature of their responses is a barrier to truly immersive conversational experiences. The goal with Hume's EVI is to provide the basis for engaging voice-first experiences that emulate the natural speech patterns of human conversation.

Hume’s EVI is built on a new form of multimodal generative AI that integrates large language models (LLMs) with expression measures, which Hume refers to as an empathic large language model (eLLM). The company’s eLLM enables EVI to adjust the words it uses and its tone of voice based on the context and the user’s emotional expressions. EVI also accurately detects the end of a user’s conversational turn before it begins speaking, stops speaking when the user interrupts, and generates responses in real time with latency under 700 ms – allowing for fluid, near-human-level conversation. With a single API call, developers can integrate EVI into any application to create state-of-the-art voice AI experiences.
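To make the turn-taking behavior described above concrete, here is a minimal illustrative sketch in Python – not Hume's actual API or implementation – of the kind of state machine a voice interface needs: accumulate user silence until an end-of-turn threshold is reached, then respond, and yield the floor immediately when the user barges in. The class name, method names, and the 700 ms threshold's use as a silence cutoff are all assumptions for illustration.

```python
# Illustrative sketch only (hypothetical names, not Hume's API): a minimal
# conversational turn-taking state machine for a voice interface.

from dataclasses import dataclass


@dataclass
class TurnTaker:
    """Tracks who holds the conversational floor, one audio frame at a time."""
    end_of_turn_ms: int = 700          # silence treated as the end of the user's turn
    silence_ms: int = 0                # accumulated user silence so far
    assistant_speaking: bool = False   # whether the assistant currently has the floor

    def on_user_audio(self, is_speech: bool, frame_ms: int = 20) -> str:
        """Process one audio frame; return the action the interface should take."""
        if is_speech:
            self.silence_ms = 0
            if self.assistant_speaking:
                # The user interrupted: stop the assistant immediately.
                self.assistant_speaking = False
                return "stop_speaking"
            return "listen"
        self.silence_ms += frame_ms
        if not self.assistant_speaking and self.silence_ms >= self.end_of_turn_ms:
            # The user has paused long enough: take the turn and respond.
            self.assistant_speaking = True
            return "respond"
        return "wait"


# Example: one speech frame, then 35 silent 20 ms frames (700 ms of silence).
tt = TurnTaker()
tt.on_user_audio(True)                 # -> "listen"
for _ in range(34):
    tt.on_user_audio(False)            # -> "wait"
tt.on_user_audio(False)                # -> "respond" (threshold reached)
tt.on_user_audio(True)                 # -> "stop_speaking" (user barged in)
```

A production system would of course derive `is_speech` from a voice activity detector over streaming audio rather than a boolean flag, but the control flow is the same.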

“Hume’s empathic models are the crucial missing ingredient we’ve been looking for in the AI space,” said Ted Persson, Partner at EQT Ventures who led the investment. “We believe that Hume is building the foundational technology needed to create AI that truly understands our wants and needs, and are particularly excited by Hume’s plan to deploy it as a universal interface.”

“The main limitation of current AI systems is that they’re guided by superficial human ratings and instructions, which are error-prone and fail to tap into AI’s vast potential to come up with new ways to make people happy,” Hume AI founder Alan Cowen explained. “By building AI that learns directly from proxies of human happiness, we’re effectively teaching it to reconstruct human preferences from first principles and then update that knowledge with every new person it talks to and every new application it’s embedded in.”

“What sets Hume AI apart is the scientific rigor and unprecedented data quality underpinning their technologies,” said Andy Weissman, managing partner at Union Square Ventures. “Hume AI’s toolkit supports an exceptionally wide range of applications, from customer service to improving the accuracy of medical diagnoses and patient care, as Hume AI’s collaborations with SoftBank and researchers at Harvard and Mt. Sinai have demonstrated.”

The growing Hume AI team currently comprises 35 leading researchers, engineers, and scientists advancing Dr. Cowen’s work on semantic space theory. His research, which has been published in numerous leading journals including Nature, Nature Human Behaviour, and Trends in Cognitive Sciences, involves the widest range and most diverse samples of emotions ever studied and informs Hume’s data-driven approach to creating more empathic AI tools. Hume’s technology leverages these research advances to learn from the tune, rhythm, and timbre of human speech, including “umms” and “ahhs,” laughs and sighs, and other nonverbal signals, to improve human-computer interactions.

“Alan Cowen’s research has transformed our understanding of the rich languages of emotional expression in the voice, face, body, and gesture,” said Dacher Keltner, a leading emotion scientist. “His work has opened up entire fields of inquiry into understanding the emotional richness of the voice and the subtleties of facial expression.”
