AI-Tech Interview with Leslie Kanthan, Chief Executive Officer and Founder at TurinTech AI

Learn about code optimization and its significance in modern business.


Leslie, can you please introduce yourself and share your experience as a CEO and Founder at TurinTech?

As you say, I’m the CEO and co-founder at TurinTech AI. Before TurinTech came into being, I worked for a range of financial institutions, including Credit Suisse and Bank of America. I met the other co-founders of TurinTech while completing my Ph.D. in Computer Science at University College London. I have a special interest in graph theory, quantitative research, and efficient similarity search techniques.

While in our respective financial jobs, we became frustrated with the manual machine learning development and code optimization processes in place. There was a real gap in the market for something better. So, in 2018, we founded TurinTech to develop our very own AI code optimization platform.

When I became CEO, I had to carry out a lot of non-technical and non-research-based work alongside the scientific work I’m accustomed to. Much of the job comes down to managing people and expectations, meaning I have to take on a variety of different areas. For instance, as well as overseeing the research side of things, I also have to understand the different management roles, know the financials, and be across all of our clients and stakeholders.

One thing I have learned in particular as a CEO is to run the company as horizontally as possible. This means creating an environment where people feel comfortable coming to me with any concerns or recommendations they have. This is really valuable for helping to guide my decisions, as I can use all the intel I am receiving from the ground up.

To set the stage, could you provide a brief overview of what code optimization means in the context of AI and its significance in modern businesses?

Code optimization refers to the process of refining and improving the underlying source code to make AI and software systems run more efficiently and effectively. It’s a critical aspect of enhancing code performance for scalability, profitability, and sustainability.

The significance of code optimization in modern businesses cannot be overstated. As businesses increasingly rely on AI, and more recently, on compute-intensive Generative AI, for various applications — ranging from data analysis to customer service — the performance of these AI systems becomes paramount.

Code optimization directly contributes to this performance by speeding up execution time and minimizing compute costs, which are crucial for business competitiveness and innovation.

For example, recent TurinTech research found that code optimization can lead to substantial improvements in execution times for machine learning codebases — up to around 20% in some cases. This not only boosts the efficiency of AI operations but also brings considerable cost savings. In the research, optimized code in an Azure-based cloud environment resulted in about a 30% cost reduction per hour for the utilized virtual machine size.
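Execution-time gains of this kind are typically verified by benchmarking the original and optimized code paths against each other. As a generic illustration (not TurinTech's methodology), Python's standard-library `timeit` module can compare two equivalent implementations, here a hypothetical naive loop versus a closed-form replacement:

```python
import timeit

def sum_of_squares_naive(n):
    # Accumulate in an explicit Python loop.
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_of_squares_optimized(n):
    # Closed-form formula for the sum of squares of 0..n-1.
    m = n - 1
    return m * (m + 1) * (2 * m + 1) // 6

# Both implementations must agree before their speed is compared.
assert sum_of_squares_naive(10_000) == sum_of_squares_optimized(10_000)

naive_t = timeit.timeit(lambda: sum_of_squares_naive(10_000), number=200)
opt_t = timeit.timeit(lambda: sum_of_squares_optimized(10_000), number=200)
print(f"naive: {naive_t:.4f}s  optimized: {opt_t:.4f}s  "
      f"speedup: {naive_t / opt_t:.1f}x")
```

The key discipline is the equality check before the timing run: an optimization that changes the output is not an optimization.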

Code optimization in AI is all about maximizing results while minimizing inefficiencies and operational costs. It’s a key factor in driving the success and sustainability of AI initiatives in the dynamic and competitive landscape of modern businesses.

Code Optimization:

What are some common challenges and issues businesses face with code optimization when implementing AI solutions?

Businesses implementing AI solutions often encounter several challenges with code optimization, mainly due to the dynamic and complex nature of AI systems compared to traditional software optimization. Achieving optimal AI performance requires a delicate balance between code, model, and data, making the process intricate and multifaceted. This complexity is compounded by the need for continuous adaptation of AI systems, as they require constant updating to stay relevant and effective in changing environments.

A significant challenge is the scarcity of skilled performance engineers, who are both rare and expensive. In cities like London, costs can reach up to £500k per year, making expertise a luxury for many smaller companies.

Furthermore, the optimization process is time- and effort-intensive, particularly in large codebases. It involves an iterative cycle of fine-tuning and analysis, demanding considerable time even for experienced engineers. Large codebases amplify this challenge, requiring significant manpower and extended time frames for new teams to contribute effectively.

These challenges highlight the necessity for better tools to make code optimization more accessible and manageable for a wider range of businesses.

Could you share some examples of the tangible benefits businesses can achieve through effective code optimization in AI applications?

AI applications are subject to change along three axes: model, code, and data. At TurinTech, our evoML platform enables users to generate and optimize efficient ML code. Meanwhile, our GenAI-powered code optimization platform, Artemis AI, can optimize more generic application code. Together, these two products help businesses significantly enhance cost-efficiency in AI applications.

At the model level, different frameworks or libraries can be used to improve model efficiency without sacrificing accuracy. However, transitioning an ML model to a different format is complex and typically requires manual conversion by developers who are experts in these frameworks.

At TurinTech AI, we provide advanced functionalities for converting existing ML models into more efficient frameworks or libraries, resulting in substantial cost savings when deploying AI pipelines.

One of our competitive advantages is our ability to optimize both the model code and the application code. Inefficient code execution, which consumes excess memory, energy, and time, can be a hidden cost in deploying AI systems. Code optimization, often overlooked, is crucial for creating high-quality, efficient codebases. Our automated code optimization features can identify and optimize the most resource-intensive lines of code, thereby reducing the costs of executing AI applications.
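Automated tooling aside, the general workflow of locating resource-intensive code can be sketched with Python's built-in profiler; this is a generic illustration with a deliberately slow hypothetical function, not a view into Artemis AI's internals:

```python
import cProfile
import io
import pstats

def expensive_step(data):
    # Deliberately quadratic: the hotspot the profiler should surface.
    return [sum(abs(a - b) for b in data) for a in data]

def cheap_step(data):
    # Linear-time step for contrast.
    return [x * 2 for x in data]

def pipeline(data):
    cheap_step(data)
    expensive_step(data)

profiler = cProfile.Profile()
profiler.enable()
pipeline(list(range(300)))
profiler.disable()

# Rank functions by cumulative time; the quadratic step dominates.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

Profiling first ensures optimization effort is spent where it actually reduces memory, energy, and time, rather than on code that barely contributes to cost.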

As mentioned earlier, our research at TurinTech showed that code optimization can improve the execution time of specific ML codebases by up to around 20%, and that the optimized code, tested in an Azure-based cloud environment, delivered cost savings of about 30% per hour for the virtual machine size used. This highlights the significant impact of optimizing at both the model and code levels in AI applications.

Are there any best practices or strategies that you recommend for businesses to improve their code optimization processes in AI development?

Code optimization leads to more efficient, greener, and cost-effective AI. Without proper optimization, AI can become expensive and challenging to scale.

Before embarking on code optimization, it’s crucial to align the process with your business objectives. This alignment involves translating your main goals into tangible performance metrics, such as reduced inference time and lower carbon emissions.

Empowering AI developers with advanced tools can automate and streamline the code optimization process, transforming what can be a lengthy and complex task into a more manageable one. This enables developers to focus on more innovative tasks.

In AI development, staying updated with AI technologies and trends is crucial, particularly by adopting a modular tech stack. This approach not only ensures efficient code optimization but also prepares AI systems for future technological advancements.

Finally, adopting eco-friendly optimization practices is more than a cost-saving measure; it’s a commitment to sustainability. Efficient code not only reduces operational costs but also lessens the environmental impact. By focusing on greener AI, businesses can contribute to a more sustainable future while reaping the benefits of efficient code.

Generative AI and Its Impact:

Generative AI has been a hot topic in the industry. Could you explain what generative AI is and how it’s affecting businesses and technology development?

Generative AI, a branch of artificial intelligence, excels in creating new content, such as text, images, code, video, and music, by learning from existing datasets and recognizing patterns.

Its swift adoption is ushering in a transformative era for businesses and technology development. McKinsey’s research underscores the significant economic potential of Generative AI, estimating it could contribute up to $4.4 trillion annually to the global economy, primarily through productivity enhancements.

This impact is particularly pronounced in sectors like banking, technology, retail, and healthcare. The high-tech and banking sectors, in particular, stand to benefit significantly. Generative AI is poised to accelerate software development, revolutionizing these industries with increased efficiency and innovative capabilities. We have observed strong interest from these two sectors in leveraging our code optimization technology to develop high-performance applications, reduce costs, and cut carbon emissions.

Are there any notable applications of generative AI that you find particularly promising or revolutionary for businesses?

Generative AI presents significant opportunities for businesses across various domains, notably in marketing, sales, software engineering, and research and development. According to McKinsey, these areas account for approximately 75% of generative AI’s total annual value.

One of the standout areas of generative AI application is in data-driven decision-making, particularly through the use of Large Language Models (LLMs). LLMs excel in analyzing a wide array of data sources and streamlining regulatory tasks via advanced document analysis. Their ability to process and extract insights from unstructured text data is particularly valuable. In the financial sector, for instance, LLMs enable companies to tap into previously underutilized data sources like news reports, social media content, and publications, opening new avenues for data analysis and insight generation.

The impact of generative AI is also profoundly felt in software engineering, a critical field across all industries. The potential for productivity improvements here is especially notable in sectors like finance and high-tech. An interesting trend in 2023 is the growing adoption of AI coding tools by traditionally conservative buyers in software, such as major banks including Citibank, JPMorgan Chase, and Goldman Sachs. This shift indicates a broader acceptance and integration of AI tools in areas where they can bring about substantial efficiency and innovation.

How can businesses harness the potential of generative AI while addressing potential ethical concerns and biases?

The principles of ethical practice and safety should be at the heart of implementing and using generative AI. Our core ethos is the belief that AI must be secure, reliable, and efficient. This means ensuring that our products, including evoML and Artemis AI, which utilize generative AI, are carefully crafted, maintained, and tested to confirm that they perform as intended.

There is a pressing need for AI systems to be free of bias, including biases present in the real world. Therefore, businesses must ensure their generative AI algorithms are optimized not only for performance but also for fairness and impartiality. Code optimization plays a crucial role in identifying and mitigating biases that might be inherent in the training data and reduces the likelihood of these biases being perpetuated in the AI’s outputs.

More broadly, businesses should adopt AI governance processes that include the continuous assessment of development methods and data and provide rigorous bias mitigation frameworks. They should scrutinize development decisions and document them in detail to ensure rigor and clarity in the decision-making process. This approach enables accountability and answerability.

Finally, this approach should be complemented by transparency and explainability. At TurinTech, for example, we ensure our decisions are transparent company-wide and also provide our users with the source code of the models developed using our platform. This empowers users and everyone involved to confidently use generative AI tools.

The Need for Sustainable AI:

Sustainable AI is becoming increasingly important. What are the environmental and ethical implications of AI development, and why is sustainability crucial in this context?

More than 1.3 million UK businesses are expected to use AI by 2040, and AI itself has a high carbon footprint. A University of Massachusetts Amherst study estimates that training a single Natural Language Processing (NLP) model can generate close to 300,000 kg of carbon emissions.

According to an MIT Technology Review article, this amount is “nearly five times the lifetime emissions of the average American car (and that includes the manufacture of the car itself).” With more companies deploying AI at scale, and in the context of the ongoing energy crisis, the energy efficiency and environmental impact of AI are becoming more crucial than ever before.

Some companies are starting to optimize their existing AI and code repositories with AI-powered code optimization techniques to address energy use and carbon emissions before deploying a machine learning model. However, most regional government policies have yet to meaningfully address the profound environmental impact of AI. Governments around the world should promote sustainable AI practices before AI causes further harm to our environment.

Can you share some insights into how businesses can achieve sustainable AI development without compromising on performance and innovation?

Sustainable AI development, where businesses maintain high performance and innovation while minimizing environmental impact, presents a multifaceted challenge. To achieve this balance, businesses can adopt several strategies.

Firstly, AI efficiency is key. By optimizing AI algorithms and code, businesses can reduce the computational power and energy required for AI operations. This not only cuts down on energy consumption and associated carbon emissions but also ensures that AI systems remain high-performing and cost-effective.

In terms of data management, employing strategies like data minimization and efficient data processing can help reduce the environmental impact. By using only the data necessary for specific AI tasks, companies can lower their storage and processing requirements.

Lastly, collaboration and knowledge sharing in the field of sustainable AI can spur innovation and performance. Businesses can find novel ways to develop AI sustainably without compromising on performance or innovation by working together, sharing best practices, and learning from each other.

What are some best practices or frameworks that you recommend for businesses aiming to integrate sustainable AI practices into their strategies?

Creating and adopting energy-efficient AI models is particularly important for data centers. Though often overlooked there, code optimization can make traditionally energy-intensive software and data-processing tasks consume significantly less power.

I would then recommend using frameworks such as a carbon footprint assessment to monitor current output and implement plans for reducing these levels. Finally, overseeing the lifecycle management of AI systems is crucial, from collecting data and creating models to scaling AI throughout the business.
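A carbon-footprint assessment can start from simple arithmetic: the energy drawn by the hardware multiplied by the grid's carbon intensity. The sketch below uses illustrative assumptions (a hypothetical 8-GPU server and a rough grid-intensity figure), not measured values:

```python
def training_emissions_kg(power_draw_kw, hours, grid_intensity_kg_per_kwh):
    """Estimate CO2-equivalent emissions for a compute workload.

    power_draw_kw: average draw of the machine(s), in kilowatts
    hours: wall-clock duration of the run
    grid_intensity_kg_per_kwh: kg CO2e emitted per kWh of grid electricity
    """
    energy_kwh = power_draw_kw * hours
    return energy_kwh * grid_intensity_kg_per_kwh

# Illustrative assumptions: a server drawing ~3 kW for 48 hours on a grid
# emitting roughly 0.2 kg CO2e per kWh.
print(training_emissions_kg(3.0, 48, 0.2))  # ~28.8 kg CO2e
```

Tracking this figure per training run, before and after optimization, turns "greener AI" from a slogan into a number that can be reported against reduction targets.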

Final Thoughts:

In your opinion, what key takeaways should business leaders keep in mind when considering the optimization of AI code and the future of AI in their organizations?

When considering the optimization of AI code and its future role in their organizations, business leaders should focus on several key aspects. Firstly, efficient and optimized AI code leads to better performance and effectiveness in AI systems, enhancing overall business operations and decision-making.

Cost-effectiveness is another crucial factor, as optimized code can significantly reduce the need for computational resources. This lowers operational costs, which becomes increasingly important as AI models grow in complexity and data requirements. Moreover, future-proofing an organization’s AI capabilities is essential in the rapidly evolving AI landscape, with code optimization ensuring that AI systems remain efficient and up-to-date.

With increasing regulatory scrutiny of AI practices, optimized code can also help ensure compliance with evolving regulations, especially in meeting ESG (Environmental, Social, and Governance) goals. In short, code optimization is a strategic imperative for business leaders, encompassing performance, cost, ethical practice, scalability, sustainability, future-readiness, and regulatory compliance.

As we conclude this interview, could you provide a glimpse into what excites you the most about the intersection of code optimization, AI, and sustainability in business and technology?

Definitely. I’m excited about sustainable innovation, particularly leveraging AI to optimize AI and code. This approach can really accelerate innovation with minimal environmental impact, tackling complex challenges sustainably. Generative AI, especially, can be resource-intensive, leading to a higher carbon footprint. Through code optimization, businesses can make their AI systems more energy-efficient.

Secondly, there’s the aspect of cost-efficient AI. Improved code efficiency and AI processes can lead to significant cost savings, encouraging wider adoption across diverse industries. Furthermore, optimized code runs more efficiently, resulting in faster processing times and more accurate results.

Do you have any final recommendations or advice for businesses looking to leverage AI optimally while remaining ethically and environmentally conscious?

I would say the key aspect to embody is continuous learning and adaptation. It’s vital to stay informed about the latest developments in AI and sustainability. Additionally, fostering a culture of continuous learning and adaptation helps integrate new ethical and environmental standards as they evolve.

Leslie Kanthan

Chief Executive Officer and Founder at TurinTech AI

Dr Leslie Kanthan is CEO and co-founder of TurinTech, a leading AI optimisation company that empowers businesses to build efficient and scalable AI by automating the whole data science lifecycle. Before TurinTech, Leslie worked for financial institutions, where he was frustrated by the manual machine learning development and code optimisation processes in place. He and the team therefore built an end-to-end optimisation platform, evoML, for building and scaling AI.
