Tachyum offers its TPU Inference IP to Edge and Embedded Markets

Tachyum® today announced that it is expanding the value proposition of its Prodigy processor by offering its Tachyum TPU® (Tachyum Processing Unit) intellectual property as a licensable core, allowing developers to take full advantage of datacenter-trained AI models when building IoT and edge devices.

Tachyum’s Prodigy is the first Universal Processor, combining general-purpose processing, High Performance Computing (HPC), Artificial Intelligence (AI), Deep Machine Learning, Explainable AI, Bio AI and other AI disciplines in a single chip. With the tremendous growth of the AI chipset market for edge inference, Tachyum is looking to extend its proprietary Tachyum AI data type beyond the datacenter by licensing its internationally registered and trademarked IP to outside developers.

Key features of the TPU inference and generative AI/ML IP offering include architectural, transactional, and cycle-accurate simulators; tools and compiler support; and licensable hardware IP, including RTL in Verilog, a UVM testbench, and synthesis constraints. Tachyum has 4 bits per weight working for AI training and 2 bits per weight for inference as part of the proprietary Tachyum AI (TAI) data type, which will be announced later this year.
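The TAI data type itself has not been published, so its encoding is unknown. As a generic illustration of what "4 bits per weight" and "2 bits per weight" mean in practice, the sketch below shows plain symmetric integer quantization of floating-point weights; the function names and the quantization scheme are illustrative assumptions, not Tachyum's format.

```python
import numpy as np

def quantize_weights(w: np.ndarray, bits: int):
    """Symmetric per-tensor quantization of float weights to `bits` bits.

    Generic illustration only: the actual Tachyum AI (TAI) data type
    is unpublished, so this is ordinary integer quantization, not TAI.
    """
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for 4-bit, 1 for 2-bit
    scale = float(np.max(np.abs(w))) / qmax
    if scale == 0.0:                    # all-zero weights: avoid divide-by-zero
        scale = 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize_weights(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from quantized values."""
    return q.astype(np.float32) * scale

w = np.array([0.8, -0.31, 0.02, -0.77], dtype=np.float32)
q4, s4 = quantize_weights(w, bits=4)   # 4 bits per weight (training-style)
q2, s2 = quantize_weights(w, bits=2)   # 2 bits per weight (inference-style)
```

The trade-off the press release alludes to is visible here: the 4-bit version preserves far more of the original weight values than the 2-bit one, which collapses every weight to one of a handful of levels.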

“Inference and generative AI is coming to almost every consumer product and we believe that licensing TPU is a key avenue for Tachyum to proliferate our world-leading AI into this marketplace for models trained on Tachyum’s Prodigy Universal Processor chip,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “As Tachyum is the only owner of the TPU trademark within the AI space, it is a valuable corporate asset to not only Tachyum but to all the vendors who respect that trademark and ensure that they properly license its use as part of their products.”

Because Prodigy is a Universal Processor offering utility for all workloads, Prodigy-powered data center servers can seamlessly and dynamically switch between computational domains (such as AI/ML, HPC, and cloud) on a single architecture. By eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization, Prodigy reduces CAPEX and OPEX significantly while delivering unprecedented data center performance, power, and economics. Prodigy integrates 192 high-performance custom-designed 64-bit compute cores to deliver up to 4.5x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest-performing GPU for HPC, and 6x for AI applications.

