AITech Interview with Mehdi Daoudi, CEO and co-founder of Catchpoint.

Dive into AI Resilience: Mehdi Daoudi on GenAI Performance, Reliability, and Business Impact.

Hello Mehdi. Welcome to AI TechPark. Could you start by telling us about your professional background and your journey to becoming CEO and co-founder of Catchpoint?
My career started in ad tech at DoubleClick, where I spent a decade making sure ads loaded fast. Eventually, I led the Quality-of-Service team, which evolved into Observability after Google acquired the company. Along the way, I managed to take down 5,000 ad servers with a single mistake—a hard but invaluable lesson in resilience. Back then, I was scared my boss was going to fire me. But luckily, he had my back, and this really ingrained in me the importance of radical honesty and transparency.

The experience at DoubleClick stayed with me. In 2008, I co-founded Catchpoint, which was built on a clear mission: prevent failures before they happen. And, today, we’re the Internet Resilience company, ensuring that the world’s biggest brands don’t go down. To understand the significance of what we do, it’s important to note that our recent Internet Resilience Report 2024 found that 43% of businesses lose $1M+ per month due to outages. These kinds of IT issues need to be boardroom priorities.

What is your perspective on the current state of Generative AI (GenAI) in businesses, and how do you see its impact on decision-making, automation, and customer interactions?
As more organizations integrate AI into their processes, it becomes another external dependency that must be monitored. AI will be critical to many business processes, making its monitoring just as essential as that of any other system.

Our 2024 GenAI Benchmark Report highlights that performance varies significantly across platforms—ChatGPT had the longest authentication times globally, while watsonx.ai was three times slower than H2O.ai. If an AI tool lags or produces unreliable outputs at a critical moment, it can significantly disrupt workflows and impact business outcomes. Businesses that proactively monitor and optimize AI performance will gain a major competitive edge.

According to the recent Catchpoint 2024 GenAI Benchmark Report, H2O.ai outperformed the other platforms. What do you think sets it apart in terms of speed and reliability?
H2O.ai is winning because it optimized what matters: 1) response time and 2) efficiency. Our benchmark report found that H2O.ai had the fastest response times across nearly every country, even compensating for high connect and SSL times with better load-time efficiency.

The report highlighted that platforms like ChatGPT and watsonx.ai faced challenges with response times and user authentication. How critical are these challenges for businesses relying on these platforms for real-time decision-making?
Slow response times and authentication failures erode trust and break workflows. Just as users abandon slow websites, they’ll drop unreliable AI tools that fail them at critical moments.

Why might GenAI failures soon be more costly for businesses than simple website outages? Could you elaborate on the potential impact on customer trust and operational efficiency?
McKinsey reports that 3x more employees are using GenAI for a third or more of their work than their leaders imagine, and more than 70% of all employees believe that within 2 years, GenAI will change 30% or more of their work. GenAI is replacing traditional search and decision-making. If there’s an AI outage, it could soon be as costly as an internet outage, and businesses need to be ready.

GenAI platforms are experiencing significant delays. What role does infrastructure resilience play in ensuring that these platforms can scale and perform efficiently across global markets?
It’s critical. If ChatGPT slows down, it’s not the model failing per se; it’s the infrastructure buckling under demand spikes. Our GenAI Benchmark Report found that Google Gemini’s response time in South Africa was 6x slower than in other countries due to a redirect delay. Meta AI’s connect time in Italy was 10x slower than in the U.S. because Italian users were being routed to U.S. servers. If companies aren’t monitoring response times, regional discrepancies, and API failures, they’re flying blind and will likely lose users.

How can businesses better leverage Internet Performance Monitoring (IPM) to ensure that GenAI tools are reliable and scalable?
IPM provides many benefits, including:

  1. Proactive monitoring – Catch AI slowdowns, network congestion, and regional performance drops before they become major issues.
  2. Full-stack visibility – From DNS lookups to SSL handshake delays, knowing where AI is slowing down is half the battle.
  3. Incident prevention – Our Internet Resilience Report found that 43% of businesses lose $1M+ per month to outages. Monitoring AI dependencies reduces that risk.

In short, if AI is a mission-critical tool, companies need mission-critical monitoring. The sketch below illustrates the kind of phase-level timing that implies.
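
To make that "full-stack visibility" point concrete, here is a minimal sketch, using only Python's standard library and a hypothetical hostname, of how the DNS lookup, TCP connect, and TLS handshake times for a GenAI endpoint can be measured separately. It is an illustration of the idea, not Catchpoint's platform or any vendor's API.

```python
import socket
import ssl
import time

def measure_phases(host: str, port: int = 443, timeout: float = 10.0) -> dict:
    """Time the DNS lookup, TCP connect, and TLS handshake phases separately."""
    timings = {}

    # DNS resolution
    t0 = time.perf_counter()
    addr = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    timings["dns_ms"] = (time.perf_counter() - t0) * 1000

    # TCP connect
    t1 = time.perf_counter()
    sock = socket.create_connection((addr, port), timeout=timeout)
    timings["connect_ms"] = (time.perf_counter() - t1) * 1000

    # TLS/SSL handshake
    t2 = time.perf_counter()
    ctx = ssl.create_default_context()
    tls_sock = ctx.wrap_socket(sock, server_hostname=host)
    timings["ssl_ms"] = (time.perf_counter() - t2) * 1000

    tls_sock.close()
    return timings

if __name__ == "__main__":
    # "api.genai-provider.example" is a placeholder, not a real endpoint.
    print(measure_phases("api.genai-provider.example"))
```

Timing each phase independently, per region, is what turns findings like slow connect times or redirect penalties into something a team can act on rather than an anecdote.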

In terms of future-proofing GenAI deployments, what should organizations prioritize to maintain long-term performance and minimize operational disruptions?
They should diversify AI providers. If one platform fails, have a backup, and monitor every layer. GenAI depends on APIs, databases, cloud providers—monitor them all. Moreover, expect failures and plan for resilience by building AI infrastructure assuming it will break. Our Internet Resilience Report found that 97% of businesses say a resilient Internet Stack is crucial to success, yet many still take a reactive approach. Companies that treat AI like an essential utility will win. Those that don’t will be scrambling to fix problems after they happen.
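
As a rough sketch of the "diversify AI providers" point, a client can wrap every call in a latency budget and fall back to a second provider when the first errors out or runs slow. The endpoint URLs and request shape below are hypothetical placeholders, not any specific vendor's API, and the requests library is assumed to be available.

```python
import time
import requests  # assumes the requests package is installed

# Hypothetical provider endpoints, ordered by preference.
PROVIDERS = [
    ("primary", "https://api.primary-genai.example/v1/complete"),
    ("backup", "https://api.backup-genai.example/v1/complete"),
]

LATENCY_BUDGET_S = 3.0  # beyond this, treat the provider as failed for this request

def complete(prompt: str) -> dict:
    """Call providers in order, falling back when one fails or exceeds the budget."""
    for name, url in PROVIDERS:
        start = time.perf_counter()
        try:
            resp = requests.post(url, json={"prompt": prompt}, timeout=LATENCY_BUDGET_S)
            resp.raise_for_status()
            return {
                "provider": name,
                "latency_s": time.perf_counter() - start,
                "result": resp.json(),
            }
        except requests.RequestException:
            continue  # timed out, connection failed, or HTTP error: try the next one
    raise RuntimeError("All GenAI providers failed or exceeded the latency budget")
```

The same wrapper is a natural place to record the latency numbers themselves, so the fallback logic and the monitoring data come from the same code path.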

How do you envision the evolving relationship between AI and internet infrastructure in the next few years? What will be the key considerations for businesses as they scale their use of AI technologies?
AI will push Internet infrastructure to its limits in both exciting and challenging ways. The demand for real-time inference, low-latency processing, and constant uptime will force businesses to rethink their entire architecture. As I mentioned in my previous response, winners will treat AI like an essential utility—scalable, redundant, and constantly monitored. Everyone else? They’ll be stuck firefighting failures.

What advice would you give businesses to ensure they are not only adopting GenAI tools effectively, but also safeguarding their performance, reliability, and long-term success?
Deploy AI, but also monitor it. AI failures are inevitable, so it’s important to build in resilience. If your AI is slow, your business is slow. Treat AI latency like lost revenue; I’m sure we’ll have hard financial numbers for it soon.

A quote or advice from the author

On a mission, with the most talented people, to build a better observability platform and help companies deliver on their promise to their end-users and employees.

Mehdi Daoudi

CEO and co-founder of Catchpoint

Mehdi Daoudi is CEO and co-founder of Catchpoint, the Internet Resilience company, which he started in 2008. His experience in IT inspired him to build the digital experience platform he envisioned as a user. He spent more than ten years at Google and DoubleClick, where he was responsible for Quality of Services: buying, building, deploying, and using internal and external monitoring solutions to keep an eye on the DART infrastructure, which delivered billions of transactions a day. Mehdi holds a BS in international trade, marketing, and business from the Institut Supérieur de Gestion (France).
