d-Matrix Launches New Chiplet Connectivity Platform to Address Exploding Compute Demand for Generative AI

New Jayhawk platform capitalizes on innovative, energy-efficient chiplet interconnects to improve performance and reduce data center energy consumption

SANTA CLARA, Calif. — (BUSINESS WIRE) — January 24, 2023 — Today, d-Matrix, a leader in high-efficiency AI-compute and inference processors, announced Jayhawk, the industry’s first Open Domain-Specific Architecture (ODSA) Bunch of Wires (BoW) based chiplet platform for energy-efficient die-to-die connectivity over organic substrates. Building on the Nighthawk chiplet platform launched in 2021, the second-generation Jayhawk silicon platform extends d-Matrix’s scale-out, chiplet-based inference compute platform. d-Matrix customers will be able to use these inference compute platforms to run generative AI and large language model transformer applications with a 10-20X improvement in performance.

Large transformer models are creating new demands for AI inference at the same time that memory and energy requirements are hitting physical limits. d-Matrix provides one of the first Digital In-Memory Compute (DIMC) based inference compute platforms to come to market, transforming the economics of complex transformers and generative AI with a scalable platform built to handle the immense data and power requirements of AI inference. Improving performance can make energy-hungry data centers more efficient while reducing latency for end users of AI applications.

“With the announcement of our 2nd generation chiplet platform, Jayhawk, and a track record of execution, we are establishing our leadership in the chiplet ecosystem,” said Sid Sheth, CEO of d-Matrix. “The d-Matrix team has made great progress towards building the world’s first in-memory computing platform with a chiplet-based architecture targeted at the power-hungry and latency-sensitive demands of generative AI.”

d-Matrix’s novel compute platform uses an ingenious combination of an in-memory compute-based IC architecture, sophisticated tools that integrate with leading ANN models, and chiplets in a block grid formation to support scalability and efficiency for demanding ML workloads. By using a modular chiplet-based approach, data center customers can refresh compute platforms on a much faster cadence using a pre-validated chiplet architecture. To that end, d-Matrix plans to build chiplets with both BoW- and UCIe-based interconnects, enabling a truly heterogeneous computing platform that can accommodate third-party chiplets.

"d-Matrix has moved quickly to seize the chiplet opportunity, which should give them a first-mover advantage,” said Karl Freund, Founder and Principal Analyst at Cambrian-AI Research. “Anyone looking to add an AI accelerator to their SoC design would do well to investigate this new approach for efficient AI.”

The Jayhawk chiplet platform features:

  • 3 mm, 15 mm, and 25 mm trace lengths on organic substrate
  • 16 Gbps/wire high bandwidth throughput
  • 6-nm TSMC process technology
  • <0.5 pJ/bit energy efficiency
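
As a rough, back-of-the-envelope illustration (not a figure published by d-Matrix), the bandwidth and energy numbers above imply an upper bound on per-wire link power: at 16 Gbps per wire and under 0.5 pJ per bit, each wire dissipates less than about 8 mW. A minimal sketch of that arithmetic, assuming only the figures listed above:

    # Illustrative arithmetic only, derived from the Jayhawk figures listed above.
    # 0.5 pJ/bit is stated as an upper bound, so the result is a ceiling.
    energy_per_bit_joules = 0.5e-12   # < 0.5 pJ/bit
    bit_rate_bps = 16e9               # 16 Gbps per wire

    # (J/bit) * (bit/s) = W
    power_per_wire_watts = energy_per_bit_joules * bit_rate_bps
    print(f"Per-wire link power ceiling: {power_per_wire_watts * 1e3:.1f} mW")  # ~8.0 mW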

Jayhawk is currently available for demos and evaluation. d-Matrix will be showcasing the Jayhawk platform at the Chiplet Summit Jan 24-26 in San Jose, CA.

About d-Matrix

d-Matrix is building a new way of doing data center AI inference at scale, using in-memory computing (IMC) techniques with chiplet-level scale-out interconnects. Founded in 2019, d-Matrix has attacked the physics of memory-compute integration using innovative circuit techniques, ML tools, software, and algorithms, tackling the memory-compute integration problem, which it views as the final frontier in AI compute efficiency. Learn more at dmatrix.ai.



Media Contact
Kristen Caron
kristen.caron@aircoverpr.com
978-407-9283
