First-of-its-kind report incorporates the views of multiple companies, including Google DeepMind and Microsoft, on best practices in responsible AI governance
EqualAI, with co-authors from Google DeepMind, Microsoft, Salesforce, PepsiCo, LivePerson, Verizon, Northrop Grumman, the SAS Institute, Amazon, and others, published a first-of-its-kind report today on the state of responsible AI. The report incorporates the views of multiple companies on establishing and implementing best practices in responsible AI governance.
Amid recent efforts to advance industry standards and best practices for AI safety, this report could not be more timely. To practice responsible AI governance, business leaders must proactively establish accountability structures that identify potential risks, foster problem solving, and learn from mistakes.
“EqualAI is proud to release this report that provides insight into how leading companies are navigating the challenging and critical process of governing AI responsibly,” said Miriam Vogel, President and CEO of EqualAI. “In this report, we have gathered the expertise of leaders in responsible AI adoption to present our guide on best practices for establishing and implementing responsible AI governance. At EqualAI, we have found that aligning on AI principles allows organizations to operationalize their values by setting rules and standards to guide decision making related to AI development and use.”
As businesses increase their dependence on, and investment in, AI, there is an urgent need to align on best practices that promote responsible AI to earn and maintain trust in these systems. Not only is this the right thing to do, it is good business. Numerous studies indicate that consumers overwhelmingly expect businesses to be responsible and ethical when adopting and developing AI technology. Organizations must therefore earn customer trust in their use of AI, and to do so, they must understand and implement practices that ensure their AI is responsible—meaning safe, inclusive, and effective for all possible end users.
Currently, there is a lack of global, or even national, consensus on standards for responsible AI governance. To help companies prepare for this reality, EqualAI convenes leaders across industry, government, and civil society to align on risks, liabilities, and best practices in establishing and operationalizing responsible AI governance.
This report builds on discussions from the culminating seventh session of EqualAI’s Responsible AI Badge Program, where senior executives gathered to address best practices in responsible AI governance. The final framework they aligned on consists of the following six pillars:
EqualAI Responsible AI Governance Framework
Responsible AI Values and Principles
Accountability and Clear Lines of Responsibility
Documentation
Defined Processes
Multistakeholder Reviews
Metrics, Monitoring, and Reevaluation
With these six pillars in place, organizations will be best positioned to develop, acquire, and implement AI responsibly. The framework further identifies key components to implement across an enterprise, including, but not limited to, securing C-suite or board support, incorporating feedback from diverse and underrepresented communities, and empowering employees to flag potential concerns.