New bias detection report lets users audit their machine learning models for harmful bias at a glance with an easy-to-understand visualization
InRule Technology®, an intelligence automation company providing integrated decisioning, machine learning and process automation software to the enterprise, today announced the release of a bias detection report, a best-in-class tool for evaluating machine learning models for harmful bias. The report furthers InRule’s mission to make automation accessible across the enterprise by eliminating the complexities of programming through no-code, explainable solutions.
Building on InRule’s powerful bias detection capabilities introduced earlier this year, the bias detection report enables InRule Machine Learning users to quickly pinpoint where harmful bias may be present in models. The report provides unparalleled explainability and empowers users to swiftly assess models for harmful bias, preventing undesirable performance for individuals with protected characteristics such as age, race, and religion.
Recent InRule research found that business leaders worry that harmful bias can lead to inaccurate (58 percent) or inconsistent (46 percent) decisions, decreased operational efficiency (39 percent), and loss of business (32 percent). With the bias detection report, enterprise users can de-risk their machine learning programs.
This bias detection report is a valuable tool for data science teams seeking confirmation that a model can be safely deployed. Beyond use by data scientists in model creation, the report can provide insights to technical leadership prior to model deployment.
“Many organizations hesitate to take advantage of the power of machine learning as they are keenly aware that deploying biased models exposes them to a range of regulatory and reputational risks,” said Danny Shayman, AI and machine learning product manager, InRule. “InRule’s bias detection report adds another layer to our bias detection capability, empowering teams to deploy machine learning models with confidence.”
InRule’s bias detection report couples explainable machine learning with a high-capacity clustering engine to assess the deepest subsets of a model across millions of data paths, ensuring the model operates with equal fairness within and between the groups of people it learned to treat similarly. By contrast, most machine learning platforms that offer bias detection evaluate for bias only by averaging values across an entire model.
Once a model is trained, InRule’s semi-supervised clustering technology forms groups of predictions made for similar reasons. Subsequently, the bias testing within InRule Machine Learning evaluates those clusters with statistical tests to assess whether the attributes that make those predictions similar to each other are also correlated to protected characteristics.
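The press release does not disclose InRule's implementation, but the approach it describes — cluster similar predictions, then test each clustering for correlation with a protected characteristic — can be sketched in broad strokes. The example below is a minimal, hypothetical illustration using scikit-learn's KMeans and a chi-squared test from scipy; the data, cluster count, and protected attribute are all invented for demonstration and are not InRule's actual method.

```python
# Hypothetical sketch: group predictions made for similar reasons, then test
# whether group membership correlates with a protected characteristic.
# All names and parameters here are illustrative assumptions.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy data: 1,000 rows of model features plus a binary protected attribute.
# The attribute is deliberately correlated with feature 0 for demonstration.
X = rng.normal(size=(1000, 5))
protected = (X[:, 0] > 0.5).astype(int)

# Step 1: cluster rows whose predictions look alike. A production system
# would cluster explanation vectors (e.g., per-feature attributions) rather
# than raw features, as the release implies.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Step 2: build a contingency table of cluster membership vs. the protected
# attribute and run a chi-squared test of independence.
table = np.zeros((4, 2), dtype=int)
for c, p in zip(clusters, protected):
    table[c, p] += 1
chi2, p_value, _, _ = chi2_contingency(table)

print(f"chi2={chi2:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Cluster membership correlates with the protected attribute; review for bias.")
```

A low p-value flags clusters whose defining attributes proxy for the protected characteristic, which is the kind of subgroup-level signal that model-wide averages can wash out.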
The bias detection report for InRule Machine Learning is now available as part of the InRule free trial experience. Request a trial at www.inrule.com/free-trial.