EdgeCortix Launches SAKURA-II Platform to Power the Next Wave of Generative AI at the Edge

The next-generation high-performance, energy-efficient Edge AI accelerator addresses the latest Generative AI solutions at the edge, from vision models to multi-billion-parameter large language models

TOKYO — (BUSINESS WIRE) — May 21, 2024 — EdgeCortix® Inc., a leading fabless semiconductor company specializing in energy-efficient AI processing at the edge, today unveiled its next-generation SAKURA-II Edge AI accelerator.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20240521476499/en/

(Photo: Business Wire)

This state-of-the-art platform, paired with EdgeCortix’s innovative second generation Dynamic Neural Accelerator (DNA) architecture, is engineered to tackle the most challenging Generative AI tasks in the industry. Designed for flexibility and power efficiency, SAKURA-II empowers users to seamlessly manage a wide range of complex tasks including Large Language Models (LLMs), Large Vision Models (LVMs), and multi-modal transformer-based applications, even within the stringent environmental constraints at the edge. Featuring low latency, best-in-class memory bandwidth, high accuracy, and compact form factors, SAKURA-II delivers unparalleled performance and cost-efficiency across the diverse spectrum of edge AI applications.

Well-suited for numerous use cases across the manufacturing, Industry 4.0, security, robotics, aerospace, and telecommunications industries, SAKURA-II features EdgeCortix’s latest-generation runtime-reconfigurable neural processing engine, DNA-II. Leveraging this highly configurable intellectual property block, SAKURA-II delivers power efficiency and real-time processing capabilities while simultaneously executing multiple deep neural network models with low latency. SAKURA-II can deliver up to 60 trillion operations per second (TOPS) of effective 8-bit integer performance and 30 trillion 16-bit brain floating point (BF16) operations per second (TFLOPS), while also supporting built-in mixed precision for handling the rigorous demands of next-generation AI tasks.
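
As a rough, back-of-the-envelope illustration of what these figures imply for efficiency, the sketch below divides the stated peak throughput by the 8 W typical power envelope cited later in this release. Dividing peak numbers by typical power is a simplification, not an EdgeCortix benchmark.

```python
# Back-of-the-envelope efficiency from the figures quoted in this release:
# 60 TOPS effective INT8 and 30 TFLOPS BF16 at a typical 8 W power envelope.
int8_tops = 60.0        # trillion effective INT8 operations per second
bf16_tflops = 30.0      # trillion BF16 operations per second
typical_power_w = 8.0   # typical power envelope in watts (cited below)

print(f"INT8 efficiency: {int8_tops / typical_power_w:.1f} TOPS/W")   # 7.5 TOPS/W
print(f"BF16 efficiency: {bf16_tflops / typical_power_w:.2f} TFLOPS/W")  # 3.75 TFLOPS/W
```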

The SAKURA-II platform, with its sophisticated MERA software suite, features a heterogeneous compiler platform, advanced quantization, and model calibration capabilities. This software suite includes native support for leading development frameworks such as PyTorch, TensorFlow Lite, and ONNX. MERA's flexible host-to-accelerator unified runtime is adept at scaling across single, multi-chip, and multi-card systems at the edge, significantly streamlining AI inferencing and shortening deployment times for data scientists. Furthermore, the integration with the MERA Model Library, with seamless interface to Hugging Face Optimum, offers users access to an extensive range of the latest transformer models, ensuring a smooth transition from training to edge inference.
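
The release does not spell out MERA's compiler interface, but a minimal sketch of the kind of workflow it describes is exporting a Hugging Face transformer from PyTorch into a framework-neutral ONNX artifact of the sort such a toolchain ingests. The export below uses standard PyTorch and transformers APIs only; the model name is an arbitrary example, and any MERA-specific compile or deploy step is assumed and not shown.

```python
# Hypothetical pre-compilation step: export a Hugging Face transformer to ONNX,
# one of the model formats (PyTorch / TensorFlow Lite / ONNX) named in the release.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # example model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

# Trace with a representative input and export to ONNX.
inputs = tokenizer("Edge AI inference example", return_tensors="pt")
torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                  "attention_mask": {0: "batch", 1: "seq"}},
    opset_version=17,
)
# The resulting model.onnx would then be handed to the vendor toolchain for
# quantization, calibration, and deployment to the accelerator (not shown here).
```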

"SAKURA-II's impressive 60 TOPS performance within 8W of typical power consumption, combined with its mixed-precision and built-in memory compression capabilities, positions it as a pivotal technology for the latest Generative AI solutions at the edge," said Sakyasingha Dasgupta, CEO and Founder of EdgeCortix. "Whether running traditional AI models or the latest Llama 2/3, Stable-diffusion, Whisper or Vision-transformer models, SAKURA-II provides deployment flexibility at superior performance per watt and cost-efficiency. We are committed to ensuring we meet our customer’s varied needs and also to securing a technological foundation that remains robust and adaptable within the swiftly evolving AI sector."

Key Benefits of SAKURA-II include:

  • Optimized for Generative AI: Tailored specifically for processing Generative AI workloads at the edge with minimal power consumption.
  • Complex Model Handling: Capable of managing multi-billion parameter models like Llama 2, Stable Diffusion, DETR, and ViT within a typical power envelope of 8W.
  • Seamless Software Integration: Fully compatible with EdgeCortix’s MERA software suite, facilitating seamless transitions from model training to deployment.
  • Enhanced Memory Bandwidth: Offers up to four times more DRAM bandwidth than competing AI accelerators, ensuring superior performance for LLM and LVM workloads.
  • Real-Time Data Streaming: Optimized for low-latency operations under real-time data streaming conditions.
  • Advanced Precision: Provides software-enabled mixed-precision support for near-FP32 accuracy (a conceptual quantization and calibration sketch follows this list).
  • Sparse Computation: Supports sparse computation to reduce memory footprint and optimize bandwidth.
  • Versatile Functionality: Supports arbitrary activation functions with hardware approximation for enhanced adaptability.
  • Efficient Data Handling: Includes a dedicated Reshaper engine to manage complex data permutations on-chip and minimize host CPU load.
  • Power Management: Features on-chip power-gating and power management capabilities to facilitate ultra-high efficiency modes.
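
To make the quantization, calibration, and mixed-precision claims above more concrete, the sketch below shows conceptual post-training INT8 quantization with a calibration pass using PyTorch's own quantization tooling. This illustrates the general technique only; it is framework-level PTQ on a toy model, not the MERA toolchain or SAKURA-II's on-device precision handling.

```python
# Conceptual post-training quantization (PTQ) with calibration in PyTorch.
# Observers record activation ranges during calibration, then weights and
# activations are converted to INT8. Illustrative only; not vendor tooling.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.fc1 = nn.Linear(64, 128)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(128, 10)
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.fc2(self.relu(self.fc1(x)))
        return self.dequant(x)

model = TinyNet().eval()
model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
prepared = torch.ao.quantization.prepare(model)

# Calibration pass: run representative data so observers record activation ranges.
for _ in range(32):
    prepared(torch.randn(8, 64))

quantized = torch.ao.quantization.convert(prepared)  # INT8 weights/activations
print(quantized)
```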

SAKURA-II will be offered as a standalone device, as two M.2 modules with different DRAM capacities, and as single- and dual-device low-profile PCIe cards. Customers can reserve M.2 modules and PCIe cards today for delivery in the second half of 2024.

Reserve SAKURA-II accelerators today by registering here.

About EdgeCortix

Pioneering the future of the connected intelligent edge, EdgeCortix is a fabless semiconductor company focused on energy-efficient processing of Generative AI workloads at the edge. Founded in 2019 with R&D headquarters in Tokyo, Japan, EdgeCortix delivers a software-first approach with its patented “hardware and software co-exploration” system to design an AI-specific, runtime-reconfigurable accelerator processor from the ground up. EdgeCortix’s products address the rapidly growing edge AI hardware markets, including defense, aerospace, smart cities, Industry 4.0, autonomous vehicles, and robotics.

For more information about EdgeCortix and SAKURA-II, visit https://www.edgecortix.com/en/.



Contact:

Racepoint Global for EdgeCortix
edgecortix@racepointglobal.com
