Tachyum® today announced that it has successfully ported the eBPF Just-In-Time (JIT) compiler to its Prodigy® Universal Processor software emulation platform.
eBPF is a technology that can run sandboxed programs in a privileged context, such as the operating system kernel. It is used to extend the capabilities of the kernel safely and efficiently without requiring changes to the kernel source code or having to load kernel modules. Uses include kernel tracing, profiling and debugging; performance monitoring; network packet filtering; security policies and event monitoring; and task scheduling. The eBPF JIT is around 10 times faster than the generic eBPF interpreter.
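For context, eBPF programs are typically written in a restricted subset of C, compiled to BPF bytecode, checked by the in-kernel verifier, and then either interpreted or translated to native machine code by an architecture-specific JIT back end of the kind Tachyum has now ported. The minimal sketch below is illustrative and not taken from Tachyum's demonstration; it uses libbpf conventions and an XDP networking hook as an assumed example of the packet-filtering use case:

```c
// Illustrative eBPF program in restricted C (libbpf conventions).
// It attaches at the XDP networking hook and simply passes every
// packet through; the point is the sandboxed-program model, not the logic.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_pass_all(struct xdp_md *ctx)
{
    // After the verifier accepts this program, the kernel can either
    // interpret the BPF bytecode or JIT-compile it to native code.
    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```

Such a program is built with clang's BPF target and loaded from user space; whether it then runs interpreted or JIT-compiled depends on the kernel's per-architecture JIT support, which is what the port to Prodigy provides.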
Tachyum’s engineers also ported Kprobes (Kernel Probes), which plays an important role in eBPF JIT technology and serves as a trigger for eBPF subroutines.
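As a rough illustration of how a kprobe triggers an eBPF subroutine, the sketch below attaches a program to a kernel function entry point. The attach target do_unlinkat and the program name are assumptions made for the example, not details from Tachyum's port:

```c
// Hypothetical kprobe-triggered eBPF program (restricted C, libbpf
// conventions). The kernel function do_unlinkat is just an example
// attach point; any probe-able kernel function could be used.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("kprobe/do_unlinkat")
int trace_unlinkat(void *ctx)
{
    // When the kprobe fires, the (JIT-compiled) eBPF subroutine runs
    // and emits a line readable via the kernel's trace pipe.
    bpf_printk("do_unlinkat entered");
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

Each time the probed kernel function is entered, the kprobe fires and the attached eBPF program executes, which is why Kprobes support underpins much of the tracing tooling built on eBPF, including the execsnoop demonstration described below.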
“Our software emulation system is an important part of ensuring that existing applications can run optimally on Prodigy processors and verifies that they are fully able to reap the benefits of high performance, low power and lower total cost of ownership when compared to running in traditional data centers,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “Porting the eBPF JIT compiler to our system is a valuable enhancement for Tachyum’s customers and an important step in unlocking the full potential of Prodigy.”
Tachyum’s demonstration, available in a video at https://youtu.be/NWzhR2VP9lU, shows a Eunomia execsnoop example. Eunomia is an open-source organization dedicated to exploring and enhancing the eBPF ecosystem.
As a Universal Processor offering industry-leading performance for all workloads, Prodigy-powered data center servers can seamlessly and dynamically switch between computational domains (such as AI/ML, HPC, and cloud) with a single homogeneous architecture. By eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization, Prodigy reduces CAPEX and OPEX significantly while delivering unprecedented data center performance, power, and economics. Prodigy integrates 256 high-performance custom-designed 64-bit compute cores to deliver up to 18x the performance of the highest-performing GPU for AI applications, 3x the performance of the highest-performing x86 processors for cloud workloads, and up to 8x that of the highest-performing GPU for HPC.