Arm’s Project Trillium Offers the Industry's Most Scalable, Versatile ML Compute Platform

News Highlights:

  • A new suite of Arm® IP brings machine learning (ML) to edge devices
  • With architectures built for high performance and efficiency, Arm ML and Object Detection (OD) processors will deliver the best user experiences across the broadest range of applications
  • The new products will enable trillions of ML operations per second on mobile devices

CAMBRIDGE, England — (BUSINESS WIRE) — February 13, 2018 — Arm today announced Project Trillium, a suite of Arm IP including new highly scalable processors that will deliver enhanced machine learning (ML) and neural network (NN) functionality. The current technologies are focused on the mobile market and will enable a new class of ML-equipped devices with advanced compute capabilities, including state-of-the-art object detection.

“The rapid acceleration of artificial intelligence into edge devices is placing increased requirements for innovation to address compute while maintaining a power efficient footprint. To meet this demand, Arm is announcing its new ML platform, Project Trillium,” said Rene Haas, president, IP Products Group, Arm. “New devices will require the high-performance ML and AI capabilities these new processors deliver. Combined with the high degree of flexibility and scalability that our platform provides, our partners can push the boundaries of what will be possible across a broad range of devices.”

ML technologies today tend to focus on specific device classes or the needs of individual sectors. Arm’s Project Trillium changes that by offering ultimate scalability. While the initial launch focuses on mobile processors, future Arm ML products will deliver the ability to move up or down the performance curve – from sensors and smart speakers, to mobile, home entertainment, and beyond.

Performance

Arm's new ML and object detection processors not only provide a massive efficiency uplift over standalone CPUs, GPUs, and accelerators, but also far exceed the capabilities of traditional programmable logic such as DSPs.

The Arm ML processor is built from the ground up specifically for ML. It is based on the highly scalable Arm ML architecture and achieves the highest performance and efficiency for ML applications:

  • For mobile computing, the processor delivers more than 4.6 trillion operations per second (TOPs), with a further 2x-4x uplift in effective throughput in real-world use through intelligent data management (a back-of-envelope reading of these figures follows this list)
  • Unmatched performance in thermal- and cost-constrained environments, with an efficiency of over three trillion operations per second per watt (TOPs/W)

More details on the Arm ML processor are available on our website.
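Taken at face value, these headline figures also imply a rough power envelope. As an illustrative back-of-envelope reading of the numbers above (not an Arm-published specification):

\[
P \approx \frac{4.6\ \text{TOPs}}{3\ \text{TOPs/W}} \approx 1.5\ \text{W},
\qquad
\text{effective throughput} \approx (2\text{--}4) \times 4.6\ \text{TOPs} \approx 9\text{--}18\ \text{TOPs}.
\]

That is, the quoted peak rate fits within a mobile-class budget of roughly one and a half watts, and the claimed data-management uplift corresponds to an effective throughput of roughly 9 to 18 TOPs.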

The Arm OD processor has been designed specifically to identify people and other objects efficiently, detecting a virtually unlimited number of objects per frame:

  • Real-time detection with Full HD processing at 60 frames per second (a pixel-rate sketch follows this list)
  • Up to 80x the performance of a traditional DSP, and a significant improvement in detection quality relative to previous Arm technologies

More details on the Arm OD processor are available on our website.
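For scale, assuming the standard 1920 x 1080 Full HD frame, the quoted frame rate corresponds to the following pixel rate:

\[
1920 \times 1080 \times 60\ \text{fps} \approx 1.24 \times 10^{8}\ \text{pixels per second},
\]

i.e. roughly 124 million pixels scanned for objects every second.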

In combination, the Arm ML and OD processors perform even better, delivering a high-performance, power-efficient people detection and recognition solution. Users will enjoy high-resolution, real-time, detailed face recognition on their smart devices delivered in a battery-friendly way.

Arm NN software, when used alongside the Arm Compute Library and CMSIS-NN, is optimized for NNs and bridges the gap between NN frameworks such as TensorFlow, Caffe, and Android NN and the full range of Arm Cortex® CPUs, Arm Mali™ GPUs, and ML processors. Developers get the highest performance from ML applications by being able to fully utilize the capabilities of the underlying Arm hardware. More details on Arm NN software are available on our website.
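To make the shape of that stack concrete, below is a minimal sketch of how a developer might run a trained model through the Arm NN C++ API on Cortex-A CPUs or Mali GPUs. It is based on the publicly documented Arm NN API rather than any code shipped with this announcement; "model.pb", the tensor names "input" and "output", and the 1x224x224x3 shape are hypothetical placeholders, and exact headers and signatures depend on the Arm NN release.

    // Sketch (not Arm-supplied sample code): load a frozen TensorFlow model
    // with the Arm NN TensorFlow parser, optimize it for the available
    // backends, and run one inference.
    #include <armnn/ArmNN.hpp>
    #include <armnnTfParser/ITfParser.hpp>

    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    int main()
    {
        // Parse the trained TensorFlow graph into an Arm NN network description.
        armnnTfParser::ITfParserPtr parser = armnnTfParser::ITfParser::Create();
        const unsigned int inputDims[] = { 1, 224, 224, 3 };   // NHWC, placeholder shape
        std::map<std::string, armnn::TensorShape> inputShapes
            { { "input", armnn::TensorShape(4, inputDims) } };
        armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(
            "model.pb", inputShapes, { "output" });

        // Optimize for the preferred backends: Mali GPU first, then
        // NEON-accelerated Cortex-A CPU, then the reference implementation.
        armnn::IRuntime::CreationOptions options;
        armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
        armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
            *network,
            { armnn::Compute::GpuAcc, armnn::Compute::CpuAcc, armnn::Compute::CpuRef },
            runtime->GetDeviceSpec());

        armnn::NetworkId netId;
        runtime->LoadNetwork(netId, std::move(optNet));

        // Bind input/output buffers and run a single inference.
        auto inputBinding  = parser->GetNetworkInputBindingInfo("input");
        auto outputBinding = parser->GetNetworkOutputBindingInfo("output");

        std::vector<float> inputData(1 * 224 * 224 * 3);                      // image data goes here
        std::vector<float> outputData(outputBinding.second.GetNumElements());

        armnn::InputTensors inputs
            { { inputBinding.first, armnn::ConstTensor(inputBinding.second, inputData.data()) } };
        armnn::OutputTensors outputs
            { { outputBinding.first, armnn::Tensor(outputBinding.second, outputData.data()) } };

        runtime->EnqueueWorkload(netId, inputs, outputs);
        return 0;
    }

The backend preference list is what gives the portability described above: the same application code can fall back from a Mali GPU to NEON-accelerated Cortex-A CPUs, and additional backends such as the new ML processor can be slotted in without changing the model-loading code.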

The new suite of Arm ML IP will be available for early preview in April of this year, with general availability in mid-2018.
