AITech Interview with Geoff Tate, Founder and CEO, Flex Logix

Geoff Tate, the Founder and CEO of Flex Logix, sheds light on what companies can do to speed the development of AI products and services.

1. Can you give us a brief overview of your career before Flex Logix?

Prior to founding Flex Logix, I was the founding CEO of Rambus, which I led through to an IPO with a $2 billion market cap. Prior to that, I was Senior Vice President of Microprocessors and Logic at AMD.

2. How was the initial phase of setting up Flex Logix? What does a regular day at work look like now?

Flex Logix was formed in 2014. My co-founder Cheng Wang developed a breakthrough interconnect for use in computing chips, and we saw that this technology could revolutionize reconfigurable computing. The interconnect significantly improved the performance and efficiency of the reprogrammable logic technology called the Field-Programmable Gate Array (FPGA).

Being the CEO of a fast-growing start-up is a tough job. My typical day involves starting work early, finishing late, and attending meetings throughout the day. But I usually bike 8 miles to and from work every day to get a break.

3. What is eFPGA, and what are the benefits of EFLX eFPGA? What differentiates you from other vendors?

Flex Logix’s Embedded FPGA (eFPGA) gives semiconductor chip designers the flexibility to easily change the way a chip operates (including after it is in a customer’s final product). Prior to this, if a chip design was flawed and needed to be changed – which was often the case – the chip would need to be re-manufactured with the needed changes (a process called a re-spin). Re-spinning a chip costs millions of dollars and takes months to years. Our eFPGA business has been highly successful. In fact, in April 2022 we announced that we had reached a significant milestone of signing licenses to develop more than 32 ASICs/SoCs integrating EFLX, with nearly half already working in silicon. While many of these design wins are confidential, the customers that have publicly announced include Air Force Research Laboratory, Boeing, DARPA, Datang Telecom/MorningCore Technology, Renesas/Dialog, Sandia National Labs, SiFive, Socionext, and the U.S. Department of Defense.

After the success of our eFPGA technology, we realized that the same technology that revolutionized chip design could be applied to create an AI inference accelerator better than any other AI accelerator on the market today. Thus, we decided to develop our own AI inference accelerator. At that point, we reorganized the company into two distinct business units: eFPGA and AI inference.

Most AI inference solutions use graphics processing unit (GPU) technology, but GPUs are not offered at the price/performance point required for mainstream, mass-market products. As a result, high-performance AI inference capabilities have only been available in very high-end and expensive systems. Our AI inference chip brings a more efficient approach to AI inferencing, thereby opening up a whole new world of markets and applications. This technology can now be used for autonomous vehicles, surveillance, facial recognition, genomics/gene sequencing, industrial inspection, medical imaging, retail analytics, and many more applications. The applications that need the ability to ‘inference’ and make a decision or provide an answer are endless.

4. Can you shed some light on AI inference and let us in on a few InferX products? 

Flex Logix’s inference architecture is unique. It is optimized for very quick inferencing of complex megapixel vision applications. It combines numerous one-dimensional tensor processors with a reconfigurable, high-bandwidth, non-blocking interconnect that enables each layer of the neural network model to be configured for maximum utilization, resulting in very high performance with less cost and power. The connections between compute and memory are reconfigured in millionths of a second as the model is processed. This architecture is the basis of Flex Logix’s InferX™ X1 edge inference accelerator, which is now offered in the following board configurations:

  • X1M boards. At roughly the size of a stick of gum, the new InferX X1M boards pack high-performance inference capabilities into a low-power M.2 form factor for space- and power-constrained applications such as robotic vision, industrial, security, and retail analytics. The InferX X1M board offers the most efficient AI inference acceleration for advanced edge AI workloads such as YOLOv5. The boards are optimized for large models and megapixel images at batch=1. This provides customers with the high-performance, low-power object detection and other high-resolution image processing capabilities needed for edge servers and industrial vision systems.
  • X1P1 boards. The InferX™ X1P1 PCIe accelerator board is designed to bring high-performance AI inference acceleration to edge servers and industrial vision systems. The InferX X1 PCIe board provides customers with superior AI inference capabilities where high accuracy, high throughput, and low power on complex models are needed.
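The layer-at-a-time reconfiguration described above can be illustrated with a highly simplified sketch. This is not Flex Logix's actual scheduler; all names, numbers, and the allocation rule are hypothetical stand-ins showing the general idea of reconfiguring a pool of tensor processors per layer to maximize utilization:

```python
# Hypothetical illustration of per-layer reconfiguration: before each layer
# runs, the (simulated) interconnect allocates as many tensor processors as
# that layer can keep busy, up to the pool size.

from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    processors_needed: int  # tensor processors this layer could keep busy

TOTAL_PROCESSORS = 64  # hypothetical pool of one-dimensional tensor processors

def run_model(layers):
    """Return a per-layer (name, allocated-processors) schedule."""
    schedule = []
    for layer in layers:
        # "Reconfiguration" step: grant the layer what it can use,
        # capped by the pool, so utilization stays high layer by layer.
        allocated = min(layer.processors_needed, TOTAL_PROCESSORS)
        schedule.append((layer.name, allocated))
    return schedule

model = [Layer("conv1", 32), Layer("conv2", 128), Layer("fc", 8)]
print(run_model(model))
```

In real hardware the reconfiguration happens in microseconds between layers; the sketch only captures the allocation decision, not the timing.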

5. Could you give a sneak peek into the recent developments at Flex Logix?

We just announced the availability of EasyVision Platforms designed to help customers get to market quickly with edge computer vision products. EasyVision features the industry’s most efficient edge AI accelerator, the InferX, along with ready-to-use models that are trained to perform the most common object detection capabilities such as hard-hat detection, people counting, face mask detection, and license plate recognition.

There is an explosive demand today for edge vision solutions that bring AI capabilities to a wide range of products and services – yet many companies lack the expertise or data science know-how to develop and train models and then integrate them with existing AI accelerators.
With EasyVision, we are essentially providing an AI/ML ‘platform in a box’: the AI model is already trained, ready to integrate into an existing application, and fine-tuned to work with a hardware accelerator that is fast and accurate.

Flex Logix’s EasyVision is available today as a complete solution that includes a server, an accelerator card, and all the software needed to do the object detection. EasyVision platforms are based on standard computing hardware and include a USB camera for quick and easy trials and production deployments. A set of initial trained models, including models for hardhat detection, people counting, and others, is provided with the EasyVision platform. The AI vision model library will be continually refreshed over time in response to customer requirements.

6. Can you brief us about Flex Logix’s upcoming events?

We keep our events page regularly updated on our website. Some upcoming events include:

August 30, 2022 – TSMC Taiwan Technology Symposium, Hsinchu 

  • Come to Flex Logix’s table and learn how eFPGA can impact TSMC’s smartphone, HPC, IoT, and automotive platform solutions and why Flex Logix is the number one eFPGA vendor in the world, supporting more TSMC technologies than any other eFPGA vendor.

September 2, 2022 – TSMC Japan Technology Symposium, Yokohama

  • Come to Flex Logix’s table and learn how eFPGA can impact TSMC’s 5G, smartphone, HPC, IoT, and automotive platform solutions and why Flex Logix is the number one eFPGA vendor in the world supporting more TSMC technologies than any other eFPGA vendor.

September 5, 2022 – TSMC 2022 Japan Virtual Technology Symposium 

  • Come to Flex Logix’s virtual booth and learn how eFPGA can impact TSMC’s 5G, smartphone, HPC, IoT, and automotive platform solutions and why Flex Logix is the number one eFPGA vendor in the world supporting more TSMC technologies than any other eFPGA vendor.

7. What can companies do to speed the development of AI products and services?

So many companies in a wide range of industries want to incorporate AI capabilities into their products because AI makes those products smarter, more useful, and more effective. There are several key things that companies should do to speed the development of their platform:

  • Choose the right AI inference hardware. All too often, companies get to the finish line with their products only to find out that the performance is not what they expected. One thing that is extremely important to look for is the overall throughput of the AI acceleration hardware. Good inferencing chips are now being architected so that they can move data through them very quickly, which means they have to process that data very fast and move it in and out of memory very quickly. Oftentimes, chip suppliers throw out a wide variety of performance figures such as TOPS or ResNet-50 scores, but system/chip designers looking into these soon realize that such figures are generally meaningless. What really matters is the throughput an inferencing engine can deliver for your model, image size, batch size, and environmental conditions. This is the number one measurement of how well it will perform, yet amazingly few vendors provide it.
  • Look for turnkey solutions that make it easier to integrate AI. Bringing AI products to market is a complex task that often requires data science expertise, which many companies don’t have. Products like EasyVision take much of the complexity out of the process by bringing ready-to-use models that are already fine-tuned to an AI inference accelerator.

8. Could you tell us more about your team and how they support you?

We have assembled a world-class team with locations in Silicon Valley; Austin, Texas; and Vancouver, Canada. Everyone on our team is committed to making our customers successful, which I believe is the key to a company’s success. This team supports me on a regular basis with their knowledge and expertise, enabling me to better steer the company in the right direction and make sure we are always delivering the solutions our customers need to be successful.

9. What is the biggest piece of advice you could provide to company leaders about helping them fulfill the needs of their team?

A company is only as good as its employees because innovative technology rises from their expertise.  At Flex Logix, we are fortunate to have picked some of the best engineers in both hardware and software in the industry.  We make sure we challenge all of our employees and provide them opportunities for continued growth so that they want to work at Flex Logix for the long term.

10. What motivates you to get up every morning? What do you take pride in when it comes to your company?

We have two very strong business units with growth potential in each.  On the eFPGA side, Flex Logix set out to be for FPGA what Arm is for processors. This is now a cash-positive business and is growing rapidly as volume applications look to integrate FPGA into their SoCs to save power.  We believe this original eFPGA business can grow substantially as eFPGA becomes ubiquitous in SoCs, while our second line of business is driving edge AI Inference capabilities into high volume applications, thus growing the market to the billions of dollars that market forecasters predict. We are building a great company that makes the world a better place by enabling superior products that improve our health, our safety, our environment, and more.

Visit AITechPark for cutting-edge Tech Trends around AI, ML, Cybersecurity, along with AITech News, and timely updates from industry professionals!

Geoff Tate

Founder and CEO, Flex Logix

Geoff Tate is CEO and co-founder of Flex Logix, Inc. Earlier in his career, he was the founding CEO of Rambus, where he and two Ph.D. co-founders grew the company from four people to an IPO and a $2 billion market cap by 2005. Prior to Rambus, Mr. Tate worked for more than a decade at AMD, where he was Senior VP, Microprocessors and Logic, with more than 500 direct reports.

Prior to joining Flex Logix, Mr. Tate ran a solar company and served on several high tech boards. He currently is a board member of Everspin, the leading MRAM company.

Mr. Tate is originally from Edmonton, Canada and holds a Bachelor of Science in Computer Science from the University of Alberta and an MBA from Harvard University. He also completed his MSEE (coursework) at Santa Clara University.
