FLEX LOGIX BOOSTS AI ACCELERATOR PERFORMANCE AND LONG-TERM EFFICIENCY

MOUNTAIN VIEW, Calif., July 2, 2024 — (PRNewswire) — Flex Logix® Technologies, Inc., the leading supplier of embedded FPGA (eFPGA) IP and reconfigurable DSP/SDR/AI solutions, today announced additional applications of embedded FPGA that improve the value proposition of AI accelerators.

First, memory bandwidth. This is the scarcest resource in AI accelerators, especially in the cloud, where model weights/parameters can exceed 100 billion and HBM memory is expensive and in short supply. Techniques for saving memory bandwidth are evolving faster than the hardware.
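As a rough, hypothetical illustration of why weight precision dominates this bandwidth problem (the numbers below are illustrative assumptions, not Flex Logix figures), a 100-billion-parameter model must stream the following amounts of weight data from memory for one full pass over its weights:

    # Back-of-the-envelope weight-traffic arithmetic (illustrative assumptions only).
    PARAMS = 100e9  # 100 billion weights, as cited above

    def weight_traffic_gb(bits_per_weight: float) -> float:
        """GB that must stream from memory to read every weight exactly once."""
        return PARAMS * bits_per_weight / 8 / 1e9

    for label, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4),
                        ("2-bit", 2), ("ternary (~1.58 bits)", 1.58)]:
        print(f"{label:>22}: {weight_traffic_gb(bits):7.1f} GB per pass over the weights")

Dropping from FP16 to 2-bit weights cuts this traffic by 8x before any sparsity is applied, which is why sub-INT4 formats are attractive wherever HBM capacity and bandwidth are the bottleneck.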

"Embedded FPGA (eFPGA) can enable innovations in sub-INT4 data and weight representations (e.g. ternary, 2 bit, 3 bit, mixed or mat-mul free) to be converted on the fly by eFPGA into existing TPUs," said Cheng Wang, Flex Logix CTO & SVP Software + Architecture. "This can also be mixed with innovations in sparsity that can further reduce the memory bandwidth requirements. Aggregate memory bandwidth reduction can be up to 16x."

Second, higher performance. AI models are rapidly evolving. With most TPUs, new operators and activation functions must be handled by a much slower processor. eFPGA can be used to run the new operators and activation functions at much higher performance.
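As a hypothetical example of the kind of operator that falls outside a fixed TPU's native set, the short sketch below implements squared ReLU (an activation used in some recent transformer variants) as a plain elementwise function; on fixed-function hardware this would typically fall back to a slower host processor, whereas an eFPGA fabric could execute it inline:

    import numpy as np

    def squared_relu(x):
        """Squared ReLU: max(x, 0)**2, an activation a fixed-function TPU may not support natively."""
        return np.square(np.maximum(x, 0.0))

    x = np.linspace(-2.0, 2.0, 5)
    print(squared_relu(x))  # [0. 0. 0. 1. 4.]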

Flex Logix is already using these concepts in its own InferX AI, which is optimized for edge vision AI models and DSP.

Get more information on EFLX eFPGA and a block diagram of weight memory reduction at www.flex-logix.com.  

About Flex Logix

Flex Logix is a reconfigurable computing company providing leading-edge eFPGA, DSP/SDR and AI inference solutions for semiconductor and systems companies. Flex Logix eFPGA enables volume FPGA users to integrate the FPGA into their companion SoC, resulting in a 5-10x reduction in FPGA cost and power and an increase in compute density, which is critical for communications, networking, data centers, microcontrollers and other applications. Its scalable DSP/SDR/AI is the most efficient, providing much higher inference throughput per square millimeter and per watt. Flex Logix supports process nodes from 180nm to 7nm, with 5nm, 3nm and 18A in development. Flex Logix is headquartered in Mountain View, California and has an office in Austin, Texas. For more information, visit https://flex-logix.com.

For general information on the InferX and EFLX product lines, visit the Flex Logix website at https://flex-logix.com.

Copyright 2024. All rights reserved. Flex Logix and EFLX are registered trademarks and INFERX is a trademark of Flex Logix, Inc.

View original content to download multimedia: https://www.prnewswire.com/news-releases/flex-logix-boosts-ai-accelerator-performance-and-long-term-efficiency-302187642.html

SOURCE Flex Logix Technologies

Contact:
Flex Logix Technologies
Media contact: info@flex-logix.com
