AITech Interview with Ryan Welsh, Chief Executive Officer of Kyndi

Ensure trust in GenAI with a seamless integration of LLMs, vector databases, and semantic data models. Deliver direct, accurate answers for immediate usability.

Can you provide a brief overview of Kyndi and its core mission in the AI space?

Kyndi’s mission is to eliminate the frustration and wasted time that all of us have experienced when searching for answers. According to reports, people spend 20 percent of their working hours looking for critical information they need to make important decisions, which amounts to one day per week of lost time and opportunity. To address this pervasive problem, Kyndi has created the world’s first Generative AI (GenAI) Answer Engine, a solution that provides direct, accurate, and trusted answers to enterprise users. Kyndi generates the right answers from reliable enterprise content to avoid common hallucinations, quickly and easily learns company- and domain-specific language, and provides explainability in its results. Kyndi’s unique differentiator is our patented Cognitive Memory, which limits the input to any GenAI model to only contextually relevant data. As part of our Answer Engine, Kyndi also provides comprehensive user analytics to help content and knowledge management teams understand users’ needs and identify gaps in content. Content leaders can use our GenAI-powered, no-code tool to curate, validate, and manage knowledge bases easily and effectively, ensuring correct and relevant answers are always available to end users.

What inspired you to become involved in AI and natural language processing?

Prior to starting Kyndi, I worked at an investment research firm conducting due diligence on investment opportunities and then at a consulting firm helping organizations that wanted to partner with the federal government. My role required me to scrutinize a substantial volume of documents, and I realized how much time I was wasting reading information that was not important to my analysis. While assessing a few organizations that were developing AI for the federal government, I met several individuals who were leading experts in artificial intelligence, machine learning, and natural language processing. I became fascinated with the prospect of assessing vast amounts of documents rapidly while simultaneously developing more informed insight. When my boss agreed to fund my idea, I started Kyndi and haven’t looked back since.

Could you explain how Kyndi’s approach to AI differs from other platforms like ChatGPT?

ChatGPT is a large language model (LLM) trained on data from the internet. When you interact with it, ChatGPT predicts what the next word (technically, token) should be, generating new text based on the likelihood of each word following the one it has just written. The problem is that it can sometimes hallucinate, meaning it produces false but confident-sounding text, and its knowledge is not always up to date. For instance, ChatGPT’s training data ends in September 2021, so it knows nothing about events since then.

This is not acceptable in most business settings. To solve this problem, the standard approach is to use an “external memory” in the form of a traditional search engine, vector database (VDB), or even a knowledge graph (KG). You can store up-to-date information in these storage and retrieval systems, acquire information in response to a user question, and then pass it to the LLM to synthesize the single best answer from the input text. It is similar to an individual copying and pasting a news article into ChatGPT and prompting ChatGPT to “summarize this article.”
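The “external memory” pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Kyndi’s implementation: the retriever is a toy word-overlap scorer standing in for a search engine or vector database, and the final LLM call is left as a placeholder since it would go to a hosted model.

```python
def retrieve(question: str, documents: list[str]) -> str:
    """Return the stored document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str, context: str) -> str:
    """Combine the retrieved context and the user question into one LLM prompt."""
    return (
        "Answer the question using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

# Up-to-date enterprise content stored in the "external memory".
documents = [
    "The quarterly report shows revenue grew 12 percent year over year.",
    "The employee handbook covers vacation policy and remote work rules.",
]
question = "How much did revenue grow in the quarterly report?"
context = retrieve(question, documents)
prompt = build_prompt(question, context)
# `prompt` would then be sent to the LLM, e.g. answer = call_llm(prompt)
```

The key point is that the LLM never answers from its training data alone; it synthesizes an answer from whatever the retrieval step hands it, exactly like pasting a news article into ChatGPT and asking for a summary.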

The problem is that these traditional storage and retrieval systems weren’t designed specifically to work with LLMs. They return result lists containing entire documents that can be pages long, and all of those pages are then passed to an LLM. The LLM has a hard time cutting through the noise to find the right answer in that much text. Research now shows an inverse relationship between the length of the input text and the quality of the answer (the more input text, the worse the answer out of the LLM), and a positive relationship between the length of the input text and the likelihood of hallucination (the more input text, the more likely the model hallucinates). So if the goal is an accurate, contextually relevant, and trustworthy answer, and these systems pass pages of documents to an LLM that then cannot meet that requirement, is this really the right solution?
Ideally, you would have a system built from first principles in which the “external memory” was designed specifically to work with LLMs, and that is what we have done at Kyndi. Our Cognitive Memory is a storage and retrieval system that returns the smallest subset of meaningful information for a user’s question and then passes it to an LLM for synthesis into a direct, contextually relevant, and trustworthy answer.

Several benefits follow from this. Feeding a GenAI model a handful of relevant excerpts rather than pages of results that may or may not be relevant yields a more accurate answer. Kyndi enables understanding of company- and domain-specific information in minutes, without months of tedious data labeling and model training and retraining. Kyndi’s capability also provides explainability of results: the results link to the precise information used to generate a summary. Finally, Kyndi protects proprietary information, since the platform can pair with any GenAI tool and that tool only summarizes the targeted information Kyndi provides.

The Kyndi Platform is designed to empower the Natural-Language-Enabled Enterprise. Can you elaborate on what this concept entails and how it benefits organizations?

A Natural-Language-Enabled Enterprise leverages natural language processing (NLP) and natural language understanding (NLU) technologies to enhance its operations, communication, and decision-making processes. These capabilities can be applied both internally for employees and for external stakeholders, including customers.

While there are many potential applications of NLP and NLU today, four areas stand out as the most effective to deploy right now. Customer Support: providing round-the-clock customer assistance, answering frequently asked questions, and routing inquiries to the appropriate human agents when necessary. Search and Information Retrieval: improving the accuracy and relevance of search results within the organization’s knowledge bases and data repositories, so employees can benefit from the significant investments enterprises have already made in knowledge. Employee Productivity: assisting employees with tasks ranging from developing more relevant internally and externally facing content, to scheduling meetings and retrieving information from databases. Market and Competitive Intelligence: making sense of the vast reams of data available online and in paid research reports to enable quicker, better-informed assessments of different markets and product areas.

But again, GenAI can only enhance these experiences if it can address the problems of hallucinations, explainability, domain-specific information, and security.

It’s fascinating to hear about the U.S. Air Force leveraging the Kyndi Platform. Could you share a bit more about how the platform is assisting intelligence analysts in extracting insights from historical data?

The U.S. Air Force has amassed a substantial volume of intelligence and data in the form of reports collected over several decades, many of which are still classified. The challenge for any current intelligence analyst is that there is so much information available that it is highly probable that key pieces of information will be missed when they are needed the most. Kyndi enables intelligence analysts to ask full sentence questions and find the relevant and trusted information they seek in one click to optimize the chance that they will derive the most accurate insight for the warfighter, whenever they need it.

What challenges did Kyndi overcome while adapting its technology to suit the needs of the U.S. Air Force?

The major challenges for any NLP or LLM use case are accuracy, understanding domain-specific terms in a timely manner, and explainability that lets the end user easily find additional context for key pieces of information. Kyndi addresses all of these issues for organizations, including the U.S. Air Force. First, Kyndi tuned the underlying LLM to recognize Air Force-specific terminology, including aircraft models, military jargon, and acronyms, which allowed the system to understand the questions a warfighter would ask. Explainability then allowed end users to quickly obtain additional context from the underlying information and further tune the model based on it. Finally, Kyndi conducted a last round of tuning based on user feedback to ensure high-precision results.

Kyndi was recognized by the World Economic Forum as a “Technology Pioneer.” How does it feel to receive such recognition for your innovation in the field of Neuro-Symbolic AI?

Kyndi is exceptionally proud and honored to be recognized by the World Economic Forum for our innovations in Neuro-Symbolic AI. Kyndi has already put Neuro-Symbolic AI into full production on operational customer applications, even though most reputable analysts believe it will be many years before vendors can deploy the technology.

As an example, Gartner has indicated that this technology is more than 10 years away from reaching the peak of its Hype Cycle. Analysts are routinely surprised to learn that Kyndi is in full production today with Neuro-Symbolic AI. Kyndi believes the hybrid Neuro-Symbolic AI approach will help democratize AI by reducing the need for time-consuming and expensive tasks such as data labeling, model training, and tuning, thereby accelerating time to production for AI projects.

Explainability is crucial in AI applications. How does Kyndi ensure that the answers provided by its platform are explainable and transparent to users?

Explainability is a key Kyndi differentiator and enterprise users generally view this capability as critical to their brand as well as necessary to meet regulatory requirements in certain industries like the pharmaceutical and financial services sectors.

Kyndi uniquely allows users to see the specific sentences that feed the generated summary produced by GenAI. Additionally, users can click each source link to go to the specific passage rather than just the entire document, so they can read additional context directly. Because users can see the sources of every generated summary, they gain trust both in the answers and in the organization providing the information. This capability directly contrasts with ChatGPT and other GenAI solutions, which do not provide sources and cannot restrict themselves to only relevant information when generating summaries. And while some vendors may technically provide visibility into sources, there are so many to consider that the information becomes impractical to use.
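Passage-level attribution of this kind can be sketched as a simple data shape: each sentence that feeds a summary keeps a pointer to the exact document and position it came from, so a link can target the passage rather than the whole document. All names here are invented for illustration; Kyndi’s actual implementation is not public.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str       # which document the sentence came from
    sentence_no: int  # position of the sentence within that document
    text: str         # the sentence itself

def answer_with_sources(summary: str, passages: list[Passage]) -> dict:
    """Bundle a generated summary with links to its supporting passages."""
    return {
        "answer": summary,
        "sources": [
            # Each link targets the exact sentence, not just the document.
            {"link": f"{p.doc_id}#sentence-{p.sentence_no}", "text": p.text}
            for p in passages
        ],
    }

passages = [
    Passage("policy.pdf", 14, "Employees accrue 1.5 vacation days per month."),
    Passage("policy.pdf", 15, "Unused days roll over for one year."),
]
result = answer_with_sources(
    "Staff earn 1.5 vacation days monthly, and unused days roll over for a year.",
    passages,
)
```

A reader of `result` can verify every claim in the answer by following the sentence-level links, which is what makes the summary auditable rather than a black box.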

Generative AI and next-generation search are evolving rapidly. What trends do you foresee in this space over the next few years?

The key short-term trend is that many organizations were initially swept up in the hype of GenAI and then ran into issues such as inaccuracy from hallucinations, difficulty interpreting and incorporating domain-specific information, lack of explainability, and security concerns around proprietary information.

The emerging trend that organizations are starting to understand is that the only way to enable trustworthy GenAI is to implement an elegant solution that combines LLMs, vector databases, semantic data models, and GenAI technologies seamlessly to deliver direct and accurate answers users can trust and use right away. As organizations realize that it is possible to leverage their trusted enterprise content today, they will deploy GenAI solutions sooner and with more confidence rather than continuing their wait-and-see stance.

How do you think Kyndi is positioned to adapt and thrive in the ever-changing landscape of AI and search technology?

Kyndi seems to be in the right place at the right time. ChatGPT has shown the world what is possible and opened many eyes to new ways of doing business. But that doesn’t mean all solutions are enterprise-ready; OpenAI itself admits that ChatGPT is too often inaccurate to be usable by organizations. Kyndi has been working on this problem for eight years and has a production-ready solution that addresses hallucinations, domain-specific information, explainability, and security today.

In fact, Kyndi is one of the few vendors offering a complete end-to-end solution that integrates language embeddings, LLMs, vector databases, semantic data models, and GenAI on a single platform, allowing enterprises to reach production 9x faster than alternative approaches. As organizations compare Kyndi to other options, they are seeing that the possibilities suggested by the release of ChatGPT are actually achievable right now.

As we conclude, what advice would you give to aspiring entrepreneurs and AI enthusiasts who are looking to make a meaningful impact in the field?

I recommend they speak with business users, since they are the ones who fund projects that become operational. Tools that require a lot of in-depth technical support might prove too expensive to generate a return on investment. Kyndi was fortunate to receive early feedback from prospective users who wanted to find answers in unstructured data without labeling massive amounts of data and then paying expensive machine learning engineers to train and retrain models. We learned that abstracting as much of the underlying AI complexity as possible away from end users would enable line-of-business teams to deploy the platform rapidly and successfully, with minimal technical engagement after integration.

Ryan Welsh

Chief Executive Officer of Kyndi

Ryan is the CEO and Founder of Kyndi, a global provider of the Generative AI Answer Engine for the Natural-Language-Enabled Enterprise, an AI-powered platform that empowers people to do their most meaningful work. Before founding Kyndi, Ryan was a Senior Associate at NextFED in Arlington, VA, the pre-eminent Deep Tech commercialization and M&A firm for the federal market. He worked with Los Alamos National Laboratory to launch startups based on technology developed at the lab. At NextFED, Ryan led the commercialization of technologies including quantum cryptography, cybersecurity, small satellites, and artificial intelligence.
