Arteris IP Ncore® Cache Coherent Interconnect Licensed by Bitmain for Sophon TPU Artificial Intelligence (AI) Chips

Network-on-chip (NoC) interconnect IP enables higher performance and smaller die area for Tensor Processing Unit (TPU) AI/ML applications

CAMPBELL, Calif. – June 9, 2019 – Arteris IP, the world’s leading supplier of innovative, silicon-proven network-on-chip (NoC) interconnect intellectual property, today announced that Bitmain has licensed Arteris Ncore Cache Coherent Interconnect IP for use in its next-generation Sophon Tensor Processing Unit (TPU) systems-on-chip (SoCs) for the scalable hardware acceleration of artificial intelligence (AI) and machine learning (ML) algorithms.

Bitmain’s Sophon TPU products are some of the first commercially available chips optimized for TPU inference and are offered as individual chips or as part of Bitmain-developed systems for use in data centers. They accelerate many deep learning frameworks, such as Caffe and TensorFlow, and next generations of the technology will expand the number of supported frameworks while increasing performance and reducing power consumption.

“Our choice of interconnect IP became more important as we continued to increase the complexity and performance of Sophon AI SoCs,” said Haichao Wang, CEO of Bitmain. “The Arteris Ncore cache coherent interconnect IP allowed us to increase our on-chip bandwidth and reduce die area, while being easy to implement in the backend. The Ncore IP’s configurability helped us optimize the die area of our SoC, which permits us to offer our users more performance at lower cost.”

“Bitmain’s choice of Arteris Ncore Cache Coherent Interconnect IP is confirmation of our cache coherent interconnect’s ability to enable novel AI SoC dataflow architectures while exceeding stringent performance, power consumption and die area requirements,” said K. Charles Janac, President and CEO of Arteris IP. “Arteris IP is the only IP company continually providing unique interconnect technologies that accelerate the development of these types of complex machine learning and artificial intelligence chips.”

About Bitmain

Founded in 2013, Bitmain transforms computing by building industry-defining technology in cryptocurrency, blockchain, and artificial intelligence (AI). Bitmain leads the industry in the production of integrated circuits for cryptocurrency mining, as well as mining hardware under the Antminer brand. The company also operates the largest cryptocurrency mining pools worldwide, Antpool.com and BTC.com. Bitmain technology supports a wide range of blockchain platforms and startups.

About Arteris IP

Arteris IP provides network-on-chip (NoC) interconnect IP to accelerate system-on-chip (SoC) semiconductor assembly for a wide range of applications from AI to automobiles, mobile phones, IoT, cameras, SSD controllers, and servers for customers such as Baidu, Mobileye, Samsung, Huawei/HiSilicon, Toshiba and NXP. Arteris IP products include the Ncore® cache coherent and FlexNoC® non-coherent interconnect IP, the CodaCache® standalone last-level cache, and optional Resilience Package (ISO 26262 functional safety), FlexNoC AI Package, and PIANO® automated timing closure capabilities. Customer results obtained by using Arteris IP products include lower power, higher performance, more efficient design reuse and faster SoC development, leading to lower development and production costs. For more information, visit www.arteris.com or find us on LinkedIn at https://www.linkedin.com/company/arteris.

Contact:

Kurt Shuler
Arteris Inc.
+1 408 470 7300
Email Contact
