New enhancements will enable organizations to meet growing regulatory and customer demands for transparency and further build trust in critical AI systems
Credo AI, the governance company operationalizing Responsible AI, today announced the general availability of new assessment and reporting capabilities in its Responsible AI Governance Platform. These enhancements will enable enterprises to easily meet new regulatory requirements and customer demands for governance artifacts, reports and disclosures on their development and use of AI, with a focus on assessing and documenting Responsible AI issues like fairness and bias, explainability, robustness, security, and privacy.
This release is the latest addition to Credo AI’s software that helps enterprises manage AI risk and compliance at scale. The new feature set allows organizations to standardize and automate reporting of Responsible AI issues across all of their AI/ML applications.
These features were developed in response to the growing call for transparency and documentation of AI systems from regulators, customers and consumers. Increasingly, the world is demanding to know how AI systems behave, particularly when it comes to issues like fairness and bias. Forthcoming regulations like New York City’s algorithmic hiring law and the EU AI Act will soon mandate that organizations building, buying and using AI conduct regular assessments or audits of their AI tools and publish reports for public consumption. Recently, the White House also introduced a blueprint for an AI Bill of Rights which provides guidance on the design, use and deployment of AI. And last month, the House of Representatives Committee on Science, Space, and Technology held a hearing on managing the risks of AI where tech leaders including Credo AI’s founder and CEO Navrina Singh discussed the need for context-focused governance and transparent reporting.
Credo AI enables customers to comply with upcoming regulations and address their customers’ questions and concerns about the AI systems they’re offering and implementing. The platform is already in use at Fortune 100 enterprises in the financial services, insurance, high tech, and aerospace and defense sectors, which are using it to generate governance artifacts and reports on the fairness, performance, and governance of their AI systems to share with customers and regulators.
The product update also includes enhancements to the Platform’s integration with Credo AI Lens, an open source Responsible AI assessment framework. The integration enables programmatic technical assessments of the fairness and bias, explainability, robustness, security, and privacy of ML models and datasets, significantly reducing the burden of Responsible AI reporting and documentation on technical teams.
“Credo AI is building the governance layer that will empower organizations to ensure that all of their internal and third-party AI is meeting business, regulatory and ethical requirements,” said Navrina Singh, founder and CEO of Credo AI. “This product release is the next step in our journey to bring context-focused governance and accountability to AI. Not only will this solution help companies bring their AI into compliance, but it will also ensure that their AI works in alignment with human-centered values.”
Credo AI’s newest product capabilities were informed by conversations across the RAI ecosystem. For the past two years, Credo AI has been actively building a community of practice in RAI with stakeholders from private, public and government sectors. Last week, Credo AI brought this community together at its inaugural Global Responsible AI Summit to amplify the Responsible AI movement globally, catalyze practical action and bridge the gap between diverse groups of stakeholders to collectively advance a future society and economy that is positively impacted by AI.
View the Summit recordings here. Join the Responsible AI Community waitlist here.