OmniML Together with Intel Unlocks True Potential of Hardware-Efficient AI

SAN JOSE, Calif., Jan. 11, 2023 — (PRNewswire) — OmniML, an enterprise artificial intelligence (AI) software company, today announced a new strategic partnership with Intel to accelerate the development and deployment of AI applications for enterprises of all sizes. The two companies will collaborate on community and customer growth opportunities via the Intel Disruptor Initiative to provide greater access to OmniML's pioneering software platform, Omnimizer®.

OmniML's software platform: Unlocking the true potential of AI on Intel hardware

OmniML in collaboration with Intel has delivered hardware-efficient AI on the latest 4th Gen Intel Xeon processor

AI is now ingrained in many people's lives, from helping us drive more safely and automating mundane tasks to providing better security. However, getting responsible, accurate, and efficient AI applications into production remains a major challenge for most organizations. One major reason is the growing gap between machine learning (ML) model training and ML model inference, which makes it difficult to design models that fully utilize the available resources on the inference hardware.

To get all the components running smoothly, the ML model design and the underlying hardware need to work in sync to deliver superior performance. OmniML and Intel have teamed up to bridge the gap between model training and inference by incorporating hardware-efficient AI development from the outset.

To kick off this collaboration, OmniML demonstrated superior performance for one of the most popular language models on Intel platforms. Using its Omnimizer platform on 4th Gen Intel Xeon Scalable processors with integrated acceleration via Intel Advanced Matrix Extensions (Intel AMX) technology, OmniML achieved a more than 10x speedup in words processed per second on a multilingual DistilBERT model¹.
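As a rough illustration only, not OmniML's Omnimizer code, the sketch below shows what BF16 inference for a multilingual DistilBERT checkpoint can look like on a 4th Gen Intel Xeon processor. It assumes PyTorch, Hugging Face Transformers, and Intel Extension for PyTorch (IPEX) are available, and the model name is a public example rather than the configuration used in the benchmark.

```python
# Illustrative sketch (not Omnimizer code): BF16 inference for a multilingual
# DistilBERT checkpoint on a 4th Gen Intel Xeon CPU. Assumes torch,
# transformers, and intel_extension_for_pytorch are installed.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModel, AutoTokenizer

model_name = "distilbert-base-multilingual-cased"  # example public checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()

# IPEX prepares the model for BF16 execution; on 4th Gen Xeon, BF16 matrix
# multiplications can be dispatched to Intel AMX tile instructions.
model = ipex.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer(["Hardware-efficient AI on Xeon."], return_tensors="pt")
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, sequence_length, hidden_size)
```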

"Intel is one of the most forward-looking semiconductor companies in the world. OmniML's strengths lie in our deep understanding of ML model design, optimization, and hardware-aware deployment approach. By bringing together OmniML's Omnimizer ML platform to work in sync with the latest Intel Xeon processor, we have achieved truly amazing performance results starting with DistilBERT and expanding to larger language models shortly." – Di Wu, OmniML Co-Founder, and CEO.

"By collaborating with OmniML, we bring together their expertise in ML model design and optimization with Intel's pioneering processor technology," said Arijit Bandyopadhyay, CTO - Enterprise Analytics & AI, Head of Strategy - Enterprise & Cloud, Data Platforms Group at Intel Corporation. "Utilizing the AI features built into the new 4th Gen Intel Xeon Scalable processor, OmniML can offer amazing AI performances to help organizations deliver reliable and leading edge products. We are excited about this collaboration and how we can help more customers accelerate the adoption of AI technology."

Unlock Hardware-Efficient AI with Ease

Omnimizer® is an ML platform that facilitates and automates ML model design, training, and deployment. It unifies the ML development and deployment workflows, helping users identify design flaws and performance bottlenecks and get models into production faster with superior runtime performance. Omnimizer provides a cloud-native interface to rapidly profile and visualize ML model performance on Intel and other hardware devices, ensuring each model is properly adapted to run efficiently. Omnimizer has demonstrated significant performance gains across many computer vision and natural language processing applications for multinational corporations and fast-growing start-ups.
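Omnimizer's own interface is not shown in this release. As a hypothetical illustration of the kind of hardware-in-the-loop latency and throughput measurement such a workflow automates, the sketch below times a transformer model's CPU inference with PyTorch's benchmarking utilities; the model name and input are placeholders, not Omnimizer internals.

```python
# Illustrative sketch: measure CPU latency and rough tokens-per-second
# throughput for a transformer model (placeholders, not Omnimizer internals).
import torch
import torch.utils.benchmark as benchmark
from transformers import AutoModel, AutoTokenizer

model_name = "distilbert-base-multilingual-cased"  # example public checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()
inputs = tokenizer(["Profile this sentence on the target CPU."], return_tensors="pt")

timer = benchmark.Timer(
    stmt="with torch.no_grad(): model(**inputs)",
    globals={"torch": torch, "model": model, "inputs": inputs},
)
measurement = timer.timeit(50)  # 50 timed runs after warm-up
tokens = inputs["input_ids"].numel()
print(f"mean latency: {measurement.mean * 1e3:.1f} ms, "
      f"~{tokens / measurement.mean:.0f} tokens/s")
```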

Natural language processing, and improving the performance of language models in particular, is one of the most important areas of AI application. Everyday devices now incorporate AI-based language models as a core feature of their design to provide human-centric, multilingual interactions. Many of these language models are based on the transformer architecture. Using Omnimizer to increase the efficiency of transformer-based language models opens up a wide range of use cases that were not possible before and lowers the total cost of ownership of language models for both on-device AI and cloud inference.

The OmniML and Intel collaboration builds on the strengths of each company, creating a winning combination of OmniML's software-based development platform on top of the latest generation of the Intel Xeon processor family.

About OmniML

OmniML's AI software enables enterprises working on ML applications in areas such as mobile, automotive, robotics, and IoT to produce smaller, faster, and more energy-efficient ML models for on-device AI. OmniML is backed by established VCs including GGV Capital, Qualcomm Ventures, Foothill Ventures, and others. It was founded and is led by world-leading researchers and industry experts, including Dr. Song Han, MIT EECS professor and experienced start-up founder; Dr. Di Wu, former Facebook engineer; and Dr. Huizi Mao, co-inventor of the "deep compression" technology developed at Stanford University. OmniML closes the gap between AI development and hardware deployment to make AI more accessible everywhere. Founded in 2021, OmniML is headquartered in San Jose, California.


¹ Speedup in performance achieved with 4th Gen Intel Xeon processors (FP32 to BF16) and further increased using Omnimizer model optimization.

Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.

View original content to download multimedia: https://www.prnewswire.com/news-releases/omniml-together-with-intel-unlocks-true-potential-of-hardware-efficient-ai-301718496.html

SOURCE OmniML Inc.

Contact:
Company Name: OmniML Inc., Intel
Henry Guo
