By embedding experimentation into observability, Datadog enables teams to innovate safely in the age of AI
Datadog, Inc. (NASDAQ: DDOG), the leading AI-powered observability and security platform, today announced that Datadog Experiments is available to customers everywhere. The new product enables teams to design, launch and measure product experiments and A/B tests directly within the Datadog platform, giving teams the data and insights they need to understand how every change affects user behavior, application performance and business outcomes.
Modern product teams rely on experimentation to validate new features and optimize user experiences. However, today’s tools are disconnected from business data systems, forcing teams to stitch together multiple solutions—such as a product analytics vendor, a standalone experimentation platform and a monitoring tool—creating fragmented workflows and blind spots between product changes and application performance. This gap becomes even more pronounced as AI accelerates feature development and release velocity.
“The faster teams ship, the more expensive it becomes to not know what’s working. When signals are scattered across disconnected tools, teams make decisions with incomplete information—missing what’s actually driving revenue and killing the bold bets that will move the business forward,” said Yanbing Li, Chief Product Officer at Datadog.
Datadog solves this problem with the first experimentation platform that combines business metrics from a customer’s data warehouse with product analytics events and application observability. Built on technology from Datadog’s acquisition of Eppo, Datadog Experiments pairs best-in-class statistical methods with real-time observability guardrails so companies can test what matters, move quickly and ship with confidence. The product empowers every product manager, designer and engineer at a company to take a measured approach to change—a must-have in the age of AI.
Datadog Experiments enables teams to:
- Accelerate decisions without the overhead: Experimentation is self-serve and standardized, so teams can move from insight to decision without coordination overhead.
- Run safer, higher-quality experiments: Built-in guardrails, real-time feedback and shared standards help teams catch issues early, protect users and keep experiments valid.
- Make decisions leaders trust: Results are credible, reproducible and comparable by measuring impact directly against source-of-truth business metrics in native data warehouses, using consistent methodologies teams can audit and trust.
“AI has increased the pace and complexity of software releases exponentially. Too often, though, teams are flying blind when it comes to measuring the efficacy of new code. That’s because they don’t have a uniform way to validate changes and monitor their impact,” said Li. “With Datadog Experiments, teams have the guardrails needed to safely validate AI-driven changes. By tying experiments to Real User Monitoring (RUM), Product Analytics, APM and logs, organizations can measure both business impact and performance implications to reduce risk without slowing innovation.”
Datadog Experiments is now generally available. To learn more, please visit: https://www.datadoghq.com/blog/experiments/.
