Intel Offers Broad Portfolio Spanning Data Center to IoT Devices and Software to Make AI Foundational to Business and Society
NEWS HIGHLIGHTS
- Intel announces AI strategy to drive breakthrough performance, democratize access and maximize societal benefits.
- Intel introduces industry’s most comprehensive data center compute portfolio for AI: the new Intel® Nervana™ platform.
- Intel aims to deliver up to a 100x reduction in the time to train a deep learning model over the next three years compared to GPU solutions.
- Intel reinforces commitment to an open AI ecosystem through an array of developer tools built for ease of use and cross-compatibility, laying the foundation for greater innovation.
SAN FRANCISCO — (BUSINESS WIRE) — November 17, 2016 — Intel Corporation today announced a range of new products, technologies and investments from the edge to the data center to help expand and accelerate the growth of artificial intelligence (AI). Intel sees AI transforming the way businesses operate and how people engage with the world. Intel is assembling the broadest set of technology options to drive AI capabilities in everything from smart factories and drones to sports, fraud detection and autonomous cars.
At an industry gathering led by Intel CEO Brian Krzanich, Intel shared how both the promise and complexities of AI require an extensive set of leading technologies to choose from and an ecosystem that can scale beyond early adopters. As algorithms become more complex and the data sets they require grow larger, Krzanich said Intel has the assets and know-how required to drive this computing transformation.
In a blog post, Krzanich said: “Intel is uniquely capable of enabling and accelerating the promise of AI. Intel is committed to AI and is making major investments in technology and developer resources to advance AI for business and society.”
Intel’s Robust AI Platform
Intel announced plans to usher in the industry’s most comprehensive portfolio for AI – the Intel® Nervana™ platform. Built for speed and ease of use, the Intel Nervana portfolio is the foundation for highly optimized AI solutions, enabling more data professionals to solve the world’s biggest challenges on industry standard technology.
Today, Intel powers 97 percent of data center servers running AI workloads and offers the most flexible, yet performance-optimized, portfolio of solutions. This portfolio ranges from Intel® Xeon® processors and Intel® Xeon Phi™ processors to more workload-optimized accelerators, including FPGAs (field-programmable gate arrays) and the technology innovations acquired from Nervana.
Intel also provided details of where the breakthrough technology from Nervana will be integrated into the product roadmap. Intel will test first silicon (code-named “Lake Crest”) in the first half of 2017 and will make it available to key customers later in the year. Lake Crest is optimized specifically for neural networks to deliver the highest performance for deep learning and offers unprecedented compute density with a high-bandwidth interconnect. In addition, Intel announced a new product (code-named “Knights Crest”) on the roadmap that tightly integrates best-in-class Intel Xeon processors with the technology from Nervana.
“We expect the Intel Nervana platform to produce breakthrough performance and dramatic reductions in the time to train complex neural networks,” said Diane Bryant, executive vice president and general manager of the Data Center Group at Intel. “Before the end of the decade, Intel will deliver a 100-fold increase in performance that will turbocharge the pace of innovation in the emerging deep learning space.”
Bryant also announced that Intel expects the next generation of Intel Xeon Phi processors (code-named “Knights Mill”) will deliver up to 4x better performance than the previous generation for deep learning and will be available in 2017. In addition, Intel announced it is shipping a preliminary version of the next generation of Intel Xeon processors (code-named “Skylake”) to select cloud service providers. With the integrated AVX-512 instruction set extensions, these Intel Xeon processors will significantly boost the performance of inference for machine learning workloads. Additional capabilities and configurations will be available when the platform family launches in mid-2017 to meet the full breadth of customer segments and requirements.
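As a rough illustration of why AVX-512 matters for inference, the sketch below shows a single-precision dot product, the core arithmetic of a fully connected layer, written in C with 512-bit fused multiply-add intrinsics. The function name and loop structure are illustrative only and are not drawn from any Intel library or product; the point is simply that each vector instruction operates on 16 floats at once.

#include <immintrin.h>   /* AVX-512 intrinsics; build with -mavx512f */
#include <stddef.h>

/* Hypothetical helper: dot product of two float vectors of length n. */
float dot_avx512(const float *a, const float *b, size_t n)
{
    __m512 acc = _mm512_setzero_ps();            /* 16-lane running sum */
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);      /* load 16 floats of a */
        __m512 vb = _mm512_loadu_ps(b + i);      /* load 16 floats of b */
        acc = _mm512_fmadd_ps(va, vb, acc);      /* acc += va * vb, fused */
    }
    float sum = _mm512_reduce_add_ps(acc);       /* sum the 16 lanes */
    for (; i < n; ++i)                           /* scalar tail */
        sum += a[i] * b[i];
    return sum;
}

In practice such kernels typically come from optimized libraries or compiler auto-vectorization rather than hand-written intrinsics; the sketch assumes only a compiler and CPU with AVX-512F support.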
Enabling AI Everywhere and Cloud Alliance with Google*
Aside from silicon, Intel highlighted other AI assets, including Intel Saffron Technology™, a leading solution for customers looking for business insights. The Saffron Technology platform leverages memory-based reasoning techniques and transparent analysis of heterogeneous data. This technology is also particularly well-suited to small devices, making intelligent local analytics possible across IoT and helping advance state-of-the-art collaborative AI.
To simplify deployment everywhere, Intel also delivers common, intelligent APIs that extend across Intel’s distributed portfolio of processors from edge to cloud, as well as embedded technologies such as Intel® RealSense™ cameras and Movidius* vision processing units (VPUs).