
Paperclip SAFE enables Advanced Data Security for GenAI

Organizations can now confidently tackle GenAI projects while keeping data secure

Paperclip Inc. today announced that its always-encrypted SAFE® solution now secures private data for use in generative AI (GenAI) projects. By implementing Paperclip SAFE encryption, organizations can confidently tackle GenAI projects knowing that their sensitive, controlled, and private data remains cryptographically secure while they leverage the benefits of Paperclip’s breakthrough searchable encryption technology.

“It has become clear that GenAI is here to stay,” said Chad Walter, Chief Revenue Officer at Paperclip. “There are millions of use cases where GenAI can create efficiencies and help us work smarter. The challenge that’s evident is that the Large Language Models (LLMs) driving GenAI lack the ability to secure sensitive and private data.”

“Paperclip SAFE is uniquely positioned to protect that sensitive, controlled, or private data—unlike any other solution on the market,” Walter added.

GenAI tools like ChatGPT that leverage public internet data are now widely available and growing in popularity. Company leadership is under immense pressure to leverage GenAI to get more out of their datasets but is met with cybersecurity concerns about the risk of sharing sensitive, controlled, or private data with public LLMs.

Even private GenAI services, such as retrieval-augmented generation (RAG) implementations that compute embeddings locally on a subset of data, carry data privacy and security implications. While the C-suite and investors push for rapid GenAI adoption, the security and compliance teams that better understand the inherent risks are forced to choose between holding back innovation and exposing the company and its clients to further risk of data theft, manipulation, or ransom.
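To illustrate the kind of local embedding step the RAG pattern above refers to, the following is a minimal Python sketch; the model name, sample records, and in-memory index are illustrative assumptions and are not Paperclip SAFE components. Even when embeddings are computed locally, the raw text and vectors remain unencrypted in memory and in the vector store unless they are protected separately, which is exactly the exposure described above.

```python
# Minimal sketch of a local RAG embedding step (illustrative only; the model,
# sample records, and in-memory index are assumptions, not Paperclip SAFE).
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # runs locally; no data leaves the host

documents = [
    "Claim #1042: policyholder Jane Doe, DOB 1981-04-02, diagnosis code E11.9",
    "Claim #1043: policyholder John Roe, DOB 1975-11-20, diagnosis code I10",
]

# Embeddings are computed locally, but text and vectors are still plaintext here.
embeddings = model.encode(documents, normalize_embeddings=True)

query = "diabetes-related claims"
query_vec = model.encode([query], normalize_embeddings=True)[0]

# Cosine similarity search over the local index.
scores = embeddings @ query_vec
print(documents[int(np.argmax(scores))])
```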

“Paperclip SAFE will help drive the adoption of GenAI by ensuring private data remains encrypted at all times,” Walter said. “As a SaaS solution, Paperclip SAFE is designed as a plugin that aligns with the vector databases and learning datasets that drive LLMs, which are the backbone of GenAI.”

As with traditional database security, encryption is the only way to truly protect PII, PHI, and other sensitive, controlled, or private data. Paperclip SAFE is the only always-encrypted data security solution that uses NIST-approved AES-256 encryption while operating at the speed of business. In addition, Paperclip’s patented shredding technology goes beyond encryption for another layer of critical protection: data is shredded, each shred is stored only once, and context is broken, resulting in probabilistic encryption.
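For context on what field-level AES-256 protection looks like in practice, here is a hypothetical Python sketch using the widely available cryptography library and AES-256-GCM. It is a generic illustration only; it does not represent Paperclip’s patented shredding or its searchable encryption, which is described as keeping data encrypted even while it is used.

```python
# Hypothetical sketch of field-level AES-256-GCM encryption applied to a
# sensitive value before it reaches a learning dataset or vector store.
# Generic illustration only; not Paperclip SAFE's implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # AES-256 key; in practice held in a KMS/HSM
aesgcm = AESGCM(key)

def encrypt_field(plaintext: str) -> bytes:
    nonce = os.urandom(12)                  # unique nonce per record
    return nonce + aesgcm.encrypt(nonce, plaintext.encode(), None)

def decrypt_field(blob: bytes) -> str:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None).decode()

token = encrypt_field("SSN 123-45-6789")    # placeholder sample value
print(decrypt_field(token))                 # -> "SSN 123-45-6789"
```

Note that with standard encryption like this, data must be decrypted before it can be searched or used, which is the gap searchable encryption aims to close.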

To better understand SAFE’s role in protecting PII and PHI in LLMs, read our latest blog post: The Risk of Personal Identifiable Information (PII) in Large Language Models (LLMs).

