
Armilla AI Launches AutoGuard™

Armilla AI, the global leader in generative AI alignment, safety and responsible AI, is proud to announce the launch of AutoGuard™, the first truly intelligent firewall to help enterprises deploy generative AI models safely and protect both users and enterprises from potential harms. The capabilities of generative AI are remarkable, but its shortcomings bring significant risks for enterprises. From mitigating bias and reducing hallucinations to enhancing safety and protecting privacy, AutoGuard is a complete solution for deploying safe and responsible enterprise-grade generative AI.

AutoGuard joins AutoTune™ as the second product in Armilla's AutoAlign™ generative AI platform. Both use the same AutoAlign framework, which incorporates auto-feedback fine-tuning, and ship with a library of off-the-shelf controls, including privacy protection, protection against confidential information leakage, gender-assumption detection, jailbreak protection and racial bias detection; they can also be tailored with customer-specific alignment controls. This means customers can specify how their AI model should behave in natural-language narratives and have that behavior enforced at runtime, as illustrated in the sketch below.
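Armilla has not published AutoGuard's interface, so the following is only a minimal sketch of what natural-language-defined controls screening model output might look like; every name here (GuardrailControl, firewall, contains_email) is an assumption, not Armilla's API.

```python
# Hypothetical sketch only: AutoGuard's real API is not public.
# All names below are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class GuardrailControl:
    """A control expressed as a natural-language policy plus a checker."""
    policy: str                      # natural-language description of the rule
    violates: Callable[[str], bool]  # returns True if the text breaks the rule


def contains_email(text: str) -> bool:
    # Crude stand-in for a learned privacy detector.
    return "@" in text and "." in text.split("@")[-1]


controls: List[GuardrailControl] = [
    GuardrailControl(
        policy="Never reveal personal contact details such as email addresses.",
        violates=contains_email,
    ),
]


def firewall(model_output: str) -> str:
    """Screen a model response against every control before it reaches the user."""
    for control in controls:
        if control.violates(model_output):
            return f"[Blocked by guardrail: {control.policy}]"
    return model_output


print(firewall("Sure, you can reach Jane at jane.doe@example.com"))
print(firewall("I'm sorry, I can't share personal contact details."))
```

In a real deployment the natural-language policy would drive a learned classifier rather than a hand-written check; the point of the sketch is only that the rule itself is stated in plain language and enforced before output reaches the user.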

“There’s a growing call within the industry that as generative AI scales, we must provide protections at runtime to ensure these models are performing safely. Based on our own testing and the feedback from our customers and industry experts, enterprises need these guardrails to be compliant with initiatives like Canada’s new voluntary code of conduct for generative AI,” says Rahm Hafiz, CTO of Armilla. “Our approach is more powerful than syntax-based guardrails or traditional fine-tuning alone, and is critical to scaling safe, reliable deployments as enterprises bring their models to market.”

Armilla’s AutoGuard solution can generate its own training data and fine-tune a customer’s guardrail using automated feedback based on how the enterprise has asked the AI model to behave. Throughout the process, Armilla’s own AI aligns AutoGuard through a series of steps: understanding the goals for the target model, generating data that captures the bounds of the model’s behavior, iteratively testing, discovering weak spots, and finally hardening its own guardrail model.
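Armilla's actual algorithms are proprietary, so the following is a purely illustrative sketch of the iterative loop described above (generate probes, test the guardrail, find weak spots, harden); the functions generate_probes, guardrail_blocks and harden are assumed stand-ins, not AutoGuard code.

```python
# Illustrative sketch of an auto-feedback hardening loop; Armilla's real
# interfaces are not public, so every name here is an assumption.
import random

random.seed(0)  # deterministic demo output


def generate_probes(goal: str, n: int = 5) -> list:
    """Stand-in for data generation: craft prompts that probe a stated goal."""
    return [f"probe {i} targeting: {goal}" for i in range(n)]


def guardrail_blocks(probe: str, strength: float) -> bool:
    """Toy guardrail: blocks a probe with probability equal to its strength."""
    return random.random() < strength


def harden(strength: float, failures: int) -> float:
    """Stand-in for fine-tuning on failed cases: tighten the guardrail."""
    return min(1.0, strength + 0.1 * failures)


goal = "never leak confidential customer data"
strength = 0.3  # deliberately weak starting guardrail

for round_number in range(1, 6):
    probes = generate_probes(goal)
    weak_spots = [p for p in probes if not guardrail_blocks(p, strength)]
    print(f"round {round_number}: {len(weak_spots)} weak spots found")
    if not weak_spots:
        break
    strength = harden(strength, len(weak_spots))
```

The real system would replace the toy probability with an actual guardrail model and replace harden() with fine-tuning on the discovered failures, but the loop structure mirrors the steps the paragraph describes.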

The result is a user-friendly platform that makes deployed generative AI safer, more trustworthy, and more ethical. AutoGuard is currently being used by a select group of clients, including:

  • HR software companies applying generative AI to their HR processes but requiring fairness to be built into their solutions
  • Financial institutions looking to develop responsible generative AI solutions
  • Consulting firms dealing with confidential and sensitive data

