Interview

AITech Interview with Harry Wang, Chief Growth Officer, Sonar

Learn how a shift in mindset and tooling is transforming the developer experience and software integrity alike.

Harry, as the Chief Growth Officer at Sonar, you’re deeply involved in the company’s AI strategy. Could you provide an overview of the AI accountability crisis and how Sonar is addressing this issue in the software development industry?

The AI accountability crisis stems from the rapid adoption of AI systems without sufficient transparency, oversight, process, or tooling in place to ensure safe and responsible use of AI. In the software development industry, this often manifests as a growing volume of incidents, issues, rework, and frustration among colleagues. Some customers have told us their developers accept over 95% of pull requests generated by AI coding assistants.

This suggests that the code is not being scrutinized at all — a lack of ownership. This not only undermines trust but also poses serious risks to users, organizations, and society at large.

To help address this, we at Sonar focus on encouraging developers and organizations to take a “trust and verify” approach to generative AI — we should trust and embrace the immense productivity benefit of AI while putting the right governance and tools in place to ensure high quality and security of software. Sonar can be an indispensable resource when it comes to verifying AI-generated code. With capabilities like Sonar AI Code Assurance, developers can validate AI-generated code and stay confident as they expand the use of code assistants. Starting with the latest SonarQube Server LTA (Long-Term Active) release, developers can automatically detect and review AI-generated code from GitHub Copilot.

AI coding assistants are revolutionizing the way code is generated. However, ensuring the security and quality of AI-generated code is crucial. How does Sonar balance the need for fast development with the importance of maintaining high-quality, secure code?

To support the speed of development while addressing the critical need for reliable, secure, maintainable code, organizations must invest in the right code quality and code security solutions to support developers in their workflow. As developers adopt AI coding assistance, the need for these solutions becomes even greater, because AI code generators can sometimes introduce unexpected quality or security issues.

With the proper automated code review tools in place, like SonarQube, developer teams can make quality and security a natural part of the development process, even as AI coding tools are adopted. At Sonar, we empower developers with solutions for detecting and remediating issues, helping them to better embrace the power of AI coding assistants as a helpful hand, not a hindrance or a replacement. From the IDE with SonarQube for IDE, through CI/CD with SonarQube Server or SonarQube Cloud, developers have the opportunity not only to increase their organization’s coding output but also to focus on the work that interests them most and best uses their skills. This, in turn, improves the developer experience while elevating both software quality and the developer’s role in the software development lifecycle (SDLC).

Sonar uses a “Trust and Verify” approach to AI coding. Can you elaborate on how this model works and why it’s critical for organizations adopting AI-assisted development tools?

Humans still need to verify that the code AI generates is accurate and of high quality, so that software remains reliable, maintainable, and secure. This is necessary to prevent risk to the business when code is pushed to production. It starts with taking a “trust and verify” approach to contributions by AI code generation tools, where developers trust their AI tools AND verify that the code won’t cause issues or long-term headaches. A “trust and verify” approach — where you employ the AI and verify its output with human review — enables organizations to take advantage of the technology without taking on excessive risk, and Sonar provides solutions that allow developers to tackle these responsibilities in a scalable way.
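To make the “trust and verify” idea concrete, here is a minimal, hypothetical sketch (not a Sonar API) of a review gate that accepts an AI-generated change only when every automated check passes. The check functions below are stand-ins for real analyzers, and all names are illustrative:

```python
# Hypothetical sketch of a "trust and verify" gate: an AI-generated change
# is accepted only if every automated check passes. Not a Sonar API.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ReviewResult:
    accepted: bool
    failures: list = field(default_factory=list)

def verify_change(code: str, checks: list) -> ReviewResult:
    """Run each check; a check returns None on pass or a message on failure."""
    failures = [msg for check in checks if (msg := check(code)) is not None]
    return ReviewResult(accepted=not failures, failures=failures)

# Toy checks standing in for real static-analysis rules:
def no_hardcoded_secret(code: str) -> Optional[str]:
    return "possible hardcoded secret" if "password =" in code else None

def no_bare_except(code: str) -> Optional[str]:
    return "bare except clause" if "except:" in code else None

result = verify_change(
    "try:\n    pass\nexcept:\n    pass\n",
    [no_hardcoded_secret, no_bare_except],
)
```

The point of the sketch is the shape of the workflow: the AI’s output is trusted as a starting point but never merged until the verification step returns an accepting result.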

Important to this as well is the concept of “start left” (an evolution of “shift left”) and moving portions of code review to where code creation starts, in the IDE. In order to avoid placing additional burden on developers, some of this code review must be automated. At Sonar, we empower developers to embrace this idea with the ability to catch and fix issues, for all code – human or AI generated – directly in the IDE through SonarQube for IDE in connected mode. When developer teams “start left,” code quality and security are prioritized from the earliest stages of software development, ensuring that potential issues are caught before they become costly problems. Embracing a “start left” mentality is the best way for teams to boost that confidence and verify every line of code as they increase their adoption of AI.

In your opinion, how can the adoption of AI in software development impact the overall risk management strategy of an organization? What safeguards should be in place to mitigate potential risks?

The adoption of AI in software development can significantly reshape an organization’s risk management strategy by introducing both opportunities and challenges. On one hand, AI can enhance efficiency, accuracy, and innovation in development processes to fuel business agility and enhance customer experiences. On the other hand, it introduces new risks related to security, compliance, data / IP management, and transparency that require careful management.

AI-driven tools can streamline development, but they also magnify the consequences of poor-quality or insecure code if risks aren’t properly addressed. An overlooked vulnerability in AI-generated code, for example, can lead to significant security breaches. Incorporating safeguards like SDLC controls, automated code review, ethical and compliance frameworks, continuous monitoring, and standardized code quality and security checks, to name a few, can help with this.

Sonar’s AI Code Assurance feature offers comprehensive analysis of AI-generated code. How does this tool identify and address potential issues in AI-driven code to maintain high security and quality standards?

The AI Code Assurance capability, available in SonarQube Server and SonarQube Cloud, is designed to provide developers with the confidence to use AI-generated code while maintaining the highest standards of security and quality. It works by combining deep code analysis with seamless integration into the development workflow to proactively identify and address issues in AI-driven code, and to prevent code that hasn’t met strict standards from progressing in the build process without further review. Having AI-generated code move through the AI Code Assurance workflow helps prevent new code quality or security issues from slipping into production.

The AI Code Assurance workflow encourages developers to take full ownership of code, whether human-written or AI-generated. By enforcing high standards of quality and security, it guides developers through a thorough validation process, ensuring AI-generated code is fully understood and verified. An AI-specific quality gate, designed to enforce strict review of AI-generated code, further protects the software development pipeline. Additionally, with the latest 2025.1 release of SonarQube Server, developers benefit from autodetection of AI code in GitHub projects. The feature alerts admins when project contributors have recently used GitHub Copilot, so that the code can be protected with AI Code Assurance, improving its overall quality.
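As a hedged illustration of how a quality gate can block a pipeline, here is a minimal CI step using the SonarScanner CLI. The property names follow SonarScanner conventions, but the project key, source path, and exact properties are assumptions to verify against your SonarQube version’s documentation:

```shell
# Illustrative CI step: fail the build when the quality gate fails.
# Verify property names against your SonarQube version's documentation.
sonar-scanner \
  -Dsonar.projectKey=my-service \
  -Dsonar.sources=src \
  -Dsonar.host.url="$SONAR_HOST_URL" \
  -Dsonar.token="$SONAR_TOKEN" \
  -Dsonar.qualitygate.wait=true   # block the pipeline on a failed gate
```

With the gate set to wait, a failed quality standard stops the pipeline rather than letting unreviewed code progress toward production.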

Could you describe some of the key benefits organizations experience when they implement Sonar’s AI Code Assurance? Specifically, how does this solution help them reduce costly code-related issues and improve development efficiency?

AI Code Assurance helps to ensure thorough reviews and early detection of issues in AI-generated code, leading to:

  • Increased accountability: Developers become empowered to take ownership of all code, ensuring that every piece of AI-generated content is thoroughly analyzed and reviewed before it’s published.
  • Elevated visibility: Teams can easily view which projects contain AI-generated code, their status, and whether they are meeting quality and security standards.
  • Efficient workflows: Seamless integration enables developers to continue to work efficiently without added overhead or changes to their workflow.
  • Reduction of risk: Sonar’s quality gate and SDLC controls help organizations reduce risk and develop confidence in AI, ultimately driving wider and safer adoption of the technology.

With AI Code Assurance, organizations can prevent issues stemming from AI code generation from escalating into expensive problems, and minimize the time and cost of downstream fixes that would otherwise linger unresolved as technical debt.


AI CodeFix seems like a game-changer for developers by streamlining the process of fixing code issues. How does the integration of AI CodeFix within a developer’s workflow enhance productivity and overall satisfaction?

Sonar AI CodeFix automatically provides suggested fixes for common vulnerabilities, bugs, and quality issues found through analysis with SonarQube (Server or Cloud), saving developer time that would otherwise be spent troubleshooting. Its real-time suggestions can be opened, reviewed, and applied directly in the IDE. This ensures minimal disruption and allows developers to stay focused on building features and solving complex challenges, enhancing the developer experience.

AI CodeFix also provides clear explanations for its fixes, helping developers understand the changes and improve their skills over time. This combination of automation and education reduces frustration, accelerates development cycles, and boosts confidence. When developers are empowered to focus on what they do best, creating innovative solutions, they have higher levels of satisfaction and more efficient workflows.

In terms of user experience, how has AI CodeFix been received by developers? Are there any key metrics or feedback that highlight how this tool is improving their work processes?

In early access and in less than three months since the release of this feature, we have seen rapid adoption of AI CodeFix by our customers. In the first month of the year alone, developers used this capability to produce over 15,000 code fix suggestions for issue remediation, supporting efficient development cycles. And as part of SonarQube Server 2025.2 LTA, we’re seeing users leverage Azure OpenAI service for enhanced privacy when using AI CodeFix.

The speed of AI-assisted code generation is impressive, but so is the risk of code sprawl. How does Sonar ensure that AI tools don’t contribute to excessive complexity or unmanageable codebases over time?

Code sprawl can present a significant risk: LLMs often produce code that ends up unused or that incorporates unused references, which not only makes the codebase harder to understand and maintain but also introduces attack vectors. For example, malicious actors can trick LLMs into including seemingly benign references or dependencies that are not used now but could be exploited later, creating a massive security hole. This is called backdoor or sleeper agent injection, and it is just one example of the many ways LLMs can be manipulated to produce new attack vectors.
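As a small illustration of one symptom of sprawl that static analysis can catch, the sketch below flags imports that are never referenced. This is an illustrative example built on Python’s standard `ast` module, not Sonar’s implementation:

```python
# Illustrative sketch: flag unused imports, one symptom of code sprawl.
# Not Sonar's implementation; a real analyzer covers far more cases.
import ast

def unused_imports(source: str) -> list:
    """Return imported names that are never referenced in the source."""
    tree = ast.parse(source)
    imported, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported.add(alias.asname or alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.add(alias.asname or alias.name)
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return sorted(imported - used)

snippet = "import os\nimport json\nprint(json.dumps({}))\n"
flagged = unused_imports(snippet)  # 'os' is imported but never used
```

Catching such dead references early keeps the codebase smaller and removes the kind of dormant dependency a sleeper-agent injection relies on.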

With our SonarQube offering — SonarQube for IDE, SonarQube Server, SonarQube Cloud — we make it easy for developers to analyze AI-generated code in real time throughout the SDLC to enforce consistency with coding standards, identify redundant, unused, or overly complex code, and ensure maintainability. Taking a “trust and verify” approach, paired with the use of SonarQube for quality controls and governance directly in the development process, helps ensure that teams effectively capture the benefits that AI brings.

When issues are caught early, developers can avoid unnecessary duplication or inefficient structures that can contribute to code sprawl. Continuous monitoring and actionable feedback also ensure long-term code health, even as projects evolve. Going beyond mere issue remediation, Sonar also enables the prevention of technical debt accumulating in the first place. With automated code reviews and quality gate standards, developers can address the root causes of code-level technical debt and be strategic in their approach to effectively tackle this pervasive challenge that directly impacts budgets, resource allocation, and team morale.

Additionally, Sonar recently acquired Structure101, a pioneer in code structure analysis, to further the company’s promise of enabling all developers and organizations to protect the quality and security of their code, whether AI-generated or human-written. Folding Structure101’s capabilities into Sonar’s solutions allows for the identification of structural issues as code is written, rather than in review cycles further out in the development lifecycle. We’re bringing this to life today with the introduction of Architecture as Code in SonarQube. A language-independent, declarative approach, it allows teams to define architecture, store it alongside their code, and automatically verify it during CI/CD analysis. Of course, there will be more capabilities to come down the road.

Looking ahead, with AI adoption set to increase, how do you see the future of software development evolving in the next five years?

What role will Sonar play in shaping the industry’s approach to AI and code quality assurance?

Over the next five years, AI will continue to revolutionize software development, driving faster, more automated workflows and enabling developers to tackle increasingly complex challenges. In fact, Gartner predicts that in just two years, 70% of professional developers will be using AI-powered coding tools. However, this rapid adoption will amplify the need for accountability for the quality and security of AI-generated code. We will see a shift to a more hybrid model of development, where AI and human expertise work in tandem. Developers will leverage AI for efficiency and speed, but human oversight will remain vital to building strong, secure software.

By providing the tools and framework necessary to uphold high standards of code quality and code security, Sonar will play a pivotal role, as it always has for developers — currently, over seven million developers use Sonar to ensure their code meets quality and security standards. We aim to set the benchmark for trust and reliability in AI-assisted development, embedding real-time analysis, proactive issue detection, and actionable feedback directly into developer workflows. Our powerful code analysis tools integrate easily with popular coding environments and CI/CD pipelines for in-depth insight into the quality, maintainability, reliability, and security of code, whether human- or AI-written. This visibility gives organizations confidence that their code is being delivered with high quality.

A quote or advice from the author: We are in the early stages of AI-driven development. To use the analogy of autonomous driving, we are not at level five (vehicles like Waymo navigating the streets on their own). We are somewhere between cruise control and perhaps level three. What’s missing at this moment are standards, governance, and tools for ensuring the quality and security of AI-assisted software development. It’s important to be thoughtful about where and how we leverage AI in the SDLC – adopt AI to improve productivity, but don’t (yet) rush to take our hands off the steering wheel and hand everything to AI. This is why we at Sonar recommend that our users take a “trust and verify” approach.

Harry Wang

Chief Growth Officer, Sonar

Harry Wang has 20+ years in product management, strategy, business development, and engineering, building world-class products and scaling businesses globally. As Chief Growth Officer at Sonar, he leads the company’s strategic investments, partnerships, and product growth initiatives. Prior to Sonar, Harry held several positions at Google over twelve years. Most recently, he was Director, Founder & GM of Area 120 at Google Labs, where he led the incubation of internal startups that applied genAI and federated machine learning to knowledge discovery and privacy-sensitive applications. Harry received his Ph.D. and M.S. from Cornell University.

Sonar helps prevent code quality and security issues from reaching production, amplifies developers’ productivity in concert with AI assistants, and improves the developer experience with streamlined workflows. Sonar analyzes all code, regardless of who writes it — your internal team or genAI — resulting in more secure, reliable, and maintainable software. Rooted in the open source community, Sonar’s solutions support over 30 programming languages, frameworks, and infrastructure technologies. Today, Sonar is used by 7M+ developers and 400K organizations worldwide, including the DoD, Microsoft, NASA, MasterCard, Siemens, and T-Mobile. 

AI TechPark

Artificial Intelligence (AI) is penetrating the enterprise in an overwhelming way, and the only choice organizations have is to thrive through this advanced tech rather than be deterred by its complications.
