Lightelligence Revolutionizes Big Data Interconnect with World’s First Optical Network-on-Chip Processor

  • Domain-Specific AI Processor Unlocks New Interconnect Paradigm for Data Centers and Other High-Performance Applications, Breaking the “Memory Wall”
  • Utilizes Advanced Vertically Stacked Packaging, Integrating an Optical Network-on-Chip and Electronic Integrated Circuits into a Single Package
  • First Public Demonstration at Hot Chips 2023

BOSTON, June 28, 2023 (GLOBE NEWSWIRE) -- Lightelligence, the global leader in photonic computing, today introduced a new big data interconnect paradigm with the launch of Hummingbird™, the world’s first Optical Network-on-Chip (oNOC) processor designed for domain-specific artificial intelligence (AI) workloads.

Hummingbird utilizes advanced vertically stacked packaging technologies to integrate a photonic chip and an electronic chip into a single package that serves as the communications network for data centers and other high-performance applications. The first public demonstration of Hummingbird will take place at Hot Chips, August 27-29, at Stanford University.

“Photonics is the solution to the critical compute scaling problem, which has become pressing as traditional solutions struggle to keep up with the exponential growth of compute power demand spurred by breakthroughs in the AI industry,” remarks Yichen Shen, CEO of Lightelligence. “Hummingbird demonstrates how the industry can address the scaling problem by incorporating photonic technologies into their next-generation products.”

“Lightelligence is breaking the memory wall with its proprietary photonics technology that could revolutionize the semiconductor industry,” adds Dylan Patel, Chief Analyst at SemiAnalysis.

Hummingbird is Lightelligence’s second product in its photonic computing portfolio. Its Photonic Arithmetic Computing Engine (PACE) platform, released in late 2021, fully integrates photonics and electronics in a small form factor, leveraging custom 3D packaging and seamless co-design.

Introducing Hummingbird
Hummingbird is the first in a family of products built on Lightelligence’s oNOC platform, which significantly improves computing performance by enabling innovative interconnect topologies via silicon photonics. Its waveguides propagate signals at the speed of light and form an all-to-all data broadcast network linking each core of a 64-core domain-specific AI processor chip, giving Hummingbird significant latency and power advantages over traditional digital interconnect solutions.

Compute scaling challenges inspired the creation of an optical interconnect solution. Unlike digital networks, Hummingbird’s oNOC technology improves density scaling by enabling interconnect topologies that would otherwise be unrealizable.

In oNOC, power and latency are virtually unaffected by distance, making the technology ideal for developing new and more robust topologies that do not rely on nearest-neighbor communication. oNOC topologies like Hummingbird’s enable higher utilization of compute power even in a single electronic IC configuration because communication is more efficient. With oNOC, mapping workloads to hardware becomes easier, and designers gain greater freedom to select the right topology for the computing task.
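
To make the distance argument concrete, here is a minimal back-of-the-envelope model, written in Python, comparing a conventional nearest-neighbor electrical mesh with a distance-independent optical broadcast for a 64-core chip. The 8x8 mesh baseline and the unit-cost-per-hop assumption are illustrative choices, not Lightelligence figures.

```python
# Toy cost model: nearest-neighbor electrical mesh vs. distance-independent
# optical broadcast for a 64-core chip. Illustrative assumptions only
# (8x8 mesh, unit cost per electrical hop); not Lightelligence data.

from itertools import product

SIDE = 8                                    # assumed 8x8 grid of 64 cores
cores = list(product(range(SIDE), repeat=2))

def mesh_hops(src, dst):
    """Manhattan distance: hop count on a nearest-neighbor 2D mesh."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

pairs = [(s, d) for s in cores for d in cores if s != d]
avg_mesh = sum(mesh_hops(s, d) for s, d in pairs) / len(pairs)
worst_mesh = max(mesh_hops(s, d) for s, d in pairs)

# In the optical model, power and latency are treated as distance-independent,
# so every destination costs one waveguide traversal regardless of placement.
print(f"8x8 electrical mesh: avg {avg_mesh:.2f} hops, worst case {worst_mesh} hops")
print("optical broadcast:   every destination is 1 traversal away")
```

Even this crude model illustrates why a flat communication-cost surface makes workload placement far less constrained, which is the freedom the oNOC topologies described above are meant to exploit.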

In Hummingbird, Lightelligence implemented a low-latency optical all-to-all broadcast network spanning 64 cores. With 64 transmitters and 512 receivers, Hummingbird provides a framework to implement a variety of dense optical network topologies.
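
As a bookkeeping sketch of what 64 transmitters and 512 receivers permit, the snippet below wires up a hypothetical grouping in which each core’s single transmitter is tapped by eight receivers, one per core in an assumed eight-core group (512 receivers / 64 cores = 8 per core). The grouping is purely illustrative; the actual Hummingbird broadcast pattern is not specified in this release.

```python
# Hypothetical connectivity bookkeeping for a 64-transmitter / 512-receiver
# optical broadcast fabric. The eight-core group wiring is an illustrative
# assumption (512 receivers / 64 cores = 8 per core), not the actual
# Hummingbird topology.

NUM_CORES = 64
GROUP = 8                      # assumed broadcast-group size

# taps[t] = cores whose receivers tap transmitter t's waveguide
# (here: every core in t's eight-core group, including t itself).
taps = {
    t: [(t // GROUP) * GROUP + k for k in range(GROUP)]
    for t in range(NUM_CORES)
}

transmitters = len(taps)
receiver_taps = sum(len(dsts) for dsts in taps.values())
assert (transmitters, receiver_taps) == (64, 512)

print(f"{transmitters} transmitters, {receiver_taps} receiver taps, "
      f"{GROUP} destinations reached per broadcast")
```

Swapping in a different taps table is all it takes to model other dense optical network topologies of the kind the release refers to; in this picture the receiver budget, not wire length, is the binding constraint.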

Hummingbird’s electronic and photonic ICs are co-packaged and integrated into a PCIe form factor ready for installation in industry-standard servers. With the Lightelligence Software Development Kit (SDK), machine learning and AI workloads can be optimized to take full advantage of the oNOC. oNOC and Hummingbird IP can also be customized for other unique workloads and applications.

Future generations of Hummingbird will employ reticle stitching to support chiplet architectures, enabling better scalability, improving energy efficiency, and further reducing bottlenecks.

Availability and Pricing
Lightelligence is actively signing development partners to sample Hummingbird-based PCIe add-in cards along with Lightelligence's SDK in Q3 2023.

Contact Lightelligence at info@lightelligence.ai for inquiries on pricing and availability. Performance numbers are available to qualified customers upon request.

Lightelligence at Flash Memory Summit and Hot Chips
Lightelligence will exhibit at Flash Memory Summit in Booth #755 August 8-10 at the Santa Clara Convention Center in Santa Clara, Calif.

Lightelligence will demonstrate Hummingbird at Hot Chips 2023 at Stanford University in Palo Alto, Calif., August 27-29. Maurice Steinman, Lightelligence’s Vice President of Engineering, will present “Hummingbird Low-Latency Computing Engine” August 29 at 11:30 a.m. PDT.

About Lightelligence
Lightelligence is transforming cutting-edge photonic technology into groundbreaking solutions that offer exponential improvements in computing power and dramatically reduce energy consumption. As the global leader in the photonic computing industry, Lightelligence is to date the only company that has publicly demonstrated integrated silicon photonic computing systems working at speed. Founded in 2017, Lightelligence has approximately 200 employees worldwide and has raised more than $220 million in funding.

Engage with Lightelligence:
Website: www.lightelligence.ai/
LinkedIn: https://www.linkedin.com/company/lightelligence-ai/
Twitter: @lightelligence

For more information, contact:
Nanette Collins
Public Relations for Lightelligence
nanette@nvc.com

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/277a9c9d-a770-4528-998d-26c8c6fcfe6e


Lightelligence’s Hummingbird is the first oNOC processor designed for domain-specific AI workloads.