InferX delivers ~30x the DSP performance/mm2 of eFPGA
MOUNTAIN VIEW, Calif., March 6, 2024 /PRNewswire/ — Flex Logix® Technologies, Inc., the leading supplier of embedded FPGA (eFPGA) IP and reconfigurable DSP/AI solutions, announced today that InferX DSP is in development for use with existing EFLX eFPGA from 40nm to 7nm.
“Many customers use embedded FPGA for high-performance signal processing and want more throughput from less silicon area,” said Geoff Tate, CEO of Flex Logix. “InferX is a tensor processor (TPU) with 128 INT16 MACs and a coefficient matrix, delivered as soft IP. The TPU packs ~15x more MACs/mm2 than eFPGA and can run at roughly double the frequency. One, two, four, eight or 16 TPUs can be controlled by existing EFLX eFPGA from 40nm to 7nm. Flex Logix writes the ‘soft logic’ (Verilog) code for all of the DSP operators, and the customer programs at a high level using Simulink and the InferX Compiler, which are in development now for a major customer. The InferX solution is reconfigurable at boot time or during execution.”
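As an illustrative note, the ~30x headline figure follows directly from the two advantages quoted above: roughly 15x the MAC density multiplied by roughly 2x the operating frequency. The short Python sketch below simply shows that arithmetic; the ratios are the approximate figures from this release, not independently measured values.

```python
# Back-of-envelope check (illustrative only): how the ~15x MAC-density and
# ~2x clock-frequency advantages quoted above compound to the ~30x
# DSP performance/mm2 headline figure.

mac_density_ratio = 15.0   # ~15x more MACs per mm2 than eFPGA (from the release)
frequency_ratio = 2.0      # ~2x the operating frequency (from the release)

performance_per_mm2_ratio = mac_density_ratio * frequency_ratio
print(f"Approximate DSP performance/mm2 advantage: ~{performance_per_mm2_ratio:.0f}x")
# -> Approximate DSP performance/mm2 advantage: ~30x
```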
“InferX with existing EFLX eFPGA allows us to deliver much higher DSP throughput at much lower cost than eFPGA with MACs. For example, 16 TPUs with 12nm eFPGA can run complex INT16 FFTs, dynamically switching FFT sizes from 128 points to 8K points, at 2 Gigasamples/second (sample = INT16). Accuracy is very high because all accumulation is done at INT40,” said Cheng Wang, CTO of Flex Logix. “Even ultra-low-power 40nm can achieve significant throughput, running a 4K-point complex FFT at 6 Megasamples/second.” Benchmarks are available for all of the process nodes Flex Logix has silicon for today: 40nm, 28nm, 22nm, 16nm, 12nm and 7nm.
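To put the 12nm benchmark in perspective, the sketch below is a rough lower-bound estimate of the multiply work implied by 2 Gigasamples/second of complex FFT across 16 TPUs (16 x 128 = 2,048 MACs). It assumes a textbook radix-2 Cooley-Tukey operation count of (N/2)·log2(N) butterflies with four real multiplies each; that assumption is ours, is not the actual InferX mapping, and ignores additions, twiddle handling, memory movement and scheduling.

```python
import math

# Illustrative lower-bound estimate of the multiply work behind the quoted
# 12nm benchmark (2 Gigasamples/second of complex INT16 FFT with 16 TPUs).
# Assumes a textbook radix-2 Cooley-Tukey FFT: (N/2)*log2(N) butterflies,
# each needing one complex multiply (4 real multiplies). This is NOT the
# actual InferX mapping; it is only a sanity check of the arithmetic.

def real_multiplies_per_sample(n_points: int) -> float:
    """Real multiplies per input sample for an n-point radix-2 complex FFT."""
    butterflies = (n_points / 2) * math.log2(n_points)
    return butterflies * 4 / n_points   # 4 real multiplies per butterfly

SAMPLES_PER_SEC = 2e9                   # 2 Gigasamples/second (from the release)
TOTAL_MACS = 16 * 128                   # 16 TPUs x 128 INT16 MACs each = 2,048

for n in (128, 1024, 8192):             # FFT sizes span 128 points to 8K points
    per_sample = real_multiplies_per_sample(n)
    rate = per_sample * SAMPLES_PER_SEC
    print(f"{n:5d}-point FFT: ~{per_sample:4.1f} real multiplies/sample, "
          f"~{rate / 1e9:5.1f} GMAC/s across {TOTAL_MACS} MACs")
```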
Learn more at https://flex-logix.com/inferx-dsp/
InferX hardware can also run AI workloads.
About Flex Logix
Flex Logix is a reconfigurable computing company providing leading-edge eFPGA and AI inference technologies for semiconductor and systems companies. Flex Logix eFPGA enables volume FPGA users to integrate the FPGA into their companion SoC, resulting in a 5-10x reduction in the cost and power of the FPGA and increasing compute density, which is critical for communications, networking, data centers, microcontrollers and other applications. Its scalable AI inference is the most efficient available, providing much higher inference throughput per square millimeter and per watt. Flex Logix supports process nodes from 180nm to 7nm, with 5nm, 3nm and 18A in development. Flex Logix is headquartered in Mountain View, California and has an office in Austin, Texas. For more information, visit https://flex-logix.com.
For general information on the InferX and EFLX product lines, visit https://flex-logix.com.
Copyright 2024. All rights reserved. Flex Logix and EFLX are registered trademarks and INFERX is a trademark of Flex Logix, Inc.
SOURCE Flex Logix Technologies