InRule Technology publishes The End of AI Ambiguity

InRule Technology

“The End of AI Ambiguity,” a new research study from InRule Technology®, looks at ethical risks associated with AI and how explainability helps overcome those concerns

InRule Technology®, an intelligence automation company providing integrated decisioning, explainable AI and digital process automation software to the enterprise, today published The End of AI Ambiguity, a newly commissioned research study conducted by Forrester Consulting on behalf of InRule. The study found that ethical worries around artificial intelligence (AI) and machine learning (ML) stymie the implementation of AI/ML decisioning. In fact, 66 percent of AI/ML decision makers stated that current AI/ML offerings are unable to meet their organization’s ethical business goals.

Respondents worry that harmful bias can lead to inaccurate (58 percent) or inconsistent (46 percent) decisions, decreased operational efficiency (39 percent), and loss of business (32 percent). The study concludes that addressing these ethical risks by leveraging human accountability within AI-powered process automation is central to enabling decision makers to better predict customer needs and personalize solutions.

The research found that nearly 70 percent of decision-makers agree that involving humans in AI/ML decisioning reduces the risks associated with these technologies, but keeping humans in the loop requires AI systems with native explainability functionality. Automating human governance and engaging a wider group of stakeholders improves both decisions and model transparency.

Linking explainability with a human touch also unlocks other benefits: assurance to stakeholders that AI/ML can be used safely (59 percent), reduced regulatory risk (51 percent), and fairer models (48 percent). "Right to explainability" legislation is spreading: the Algorithmic Accountability Act of 2022 has been proposed in the U.S. Congress, and the European Union is pushing for stricter AI regulation as well. Businesses must take steps today to ensure they can prove the accuracy and fairness of their algorithms.

According to Rik Chomko, CEO and co-founder of InRule, "AI is consistently ranked by C-suite executives as critically important to the future of their business, yet two-thirds of those surveyed by Forrester Consulting have difficulty explaining the decisions their AI systems make. Built-in, native explainability empowers non-data scientists and C-suite executives to quickly understand why a decision was made and gain confidence in the outcomes of intelligence automation."

Download the full research study and recommendations here.

Business Wire

Business Wire is a trusted source for news organizations, journalists, investment professionals and regulatory authorities, delivering news directly into editorial systems and leading online news sources via its multi-patented NX Network. Business Wire has 18 newsrooms worldwide to meet the needs of communications professionals and news media.
