Powered by Phison’s aiDAPTIV+, ONEai enables LLM training and inferencing with plug-and-play deployment at the storage layer
StorONE, the developer of the most efficient storage platform, delivering unmatched data protection and flexibility with minimal hardware, today announced ONEai, a turnkey, automated AI solution for enterprise data storage. In partnership with Phison Electronics (8299TT), a leading innovator in NAND flash technologies, StorONE integrated Phison's aiDAPTIV+ AI capabilities into the StorONE enterprise storage system to accelerate AI deployment and deliver domain-specific responses drawn from stored data for end users. ONEai leverages GPU and memory optimization, intelligent data placement and built-in support for LLM inferencing and fine-tuning directly within the storage framework, offering an efficient, AI-integrated system with minimal setup complexity. With ONEai, users benefit from reduced power, operational and hardware costs, enhanced GPU performance, and on-premises LLM training and inferencing on proprietary organizational data.
As organizations grapple with how to extract insights from stored data, IT leaders and data infrastructure managers are challenged to surface findings from multi-terabyte to petabyte-scale data pools with limited AI capabilities. Previously, many organizations that sought to leverage proprietary data securely were required to build complex AI infrastructure or navigate the regulatory hurdles and costs of off-premises solutions. In these traditional approaches, high-performance storage typically serves only as a back end for LLM training, requiring external orchestration, AI stacks and cloud or hybrid workflows.
To solve this challenge, StorONE partnered with Phison to offer ONEai, which delivers fully automated, AI-native LLM training and inferencing capabilities directly within the storage layer. ONEai automatically recognizes and responds to file creation, modification and deletion, delivering real-time insights into data stored in the system. This AI-integrated storage solution is optimized for fine-tuning, RAG and inferencing, features integrated GPU memory extensions and simplifies data management via an intuitive GUI, eliminating the need for complex infrastructure or external AI platforms.
“ONEai sets a new benchmark for an increasingly AI-integrated industry, where storage is the launchpad to take data from a static component to a dynamic application,” said Gal Naor, CEO of StorONE. “Through this technology partnership with Phison, we are filling the gap between traditional storage and AI infrastructure by delivering a turnkey, automated solution that simplifies AI data insights for organizations with limited budgets or expertise. We’re lowering the barrier to entry to enable enterprises of all sizes to tap into AI-driven intelligence without the requirement of building large-scale AI environments or sending data to the cloud.”
“We’re proud to partner with StorONE to enable a first-of-its-kind solution that addresses challenges in access to expanded GPU memory, high-performance inferencing and larger capacity LLM training without the need for external infrastructure,” said Michael Wu, GM and President of Phison US. “Through the aiDAPTIV+ integration, ONEai connects the storage engine and the AI acceleration layer, ensuring optimal data flow, intelligent workload orchestration and highly efficient GPU utilization. The result is an alternative to the DIY approach for IT and infrastructure teams, who can now opt for a pre-integrated, seamless, secure and efficient AI deployment within the enterprise infrastructure.”
Capabilities in ONEai include:
Integrated AI processing at the storage layer
- Native LLM training and inference built directly into the storage stack; no external AI infrastructure required
- ONEai eliminates the need for a separate AI stack or in-house AI expertise (plug-and-play deployment) with full on-premises processing for complete data sovereignty and control over sensitive data
GPU optimization and performance efficiency
- High GPU efficiency minimizes the number of GPUs required, reducing power and operational costs. Integrated GPU modules reduce AI inference latency and deliver up to 95% hardware utilization
Real-world use case alignment
- Tailored for real customer environments to enable immediate interaction with proprietary data; ONEai automatically tracks and updates changes to data, feeding them into ongoing AI activities
ONEai will be generally available in Q3 2025. To learn more or to receive a demo, visit the StorONE website.