New NVIDIA Data Center Inference Platform to Fuel Next Wave of AI-Powered Services


“Supermicro is innovating to address the rapidly emerging high-throughput inference market driven by technologies such as 5G, Smart Cities and IoT devices, which are generating huge amounts of data and require real-time decision making. We see the combination of NVIDIA TensorRT and the new Turing architecture-based T4 GPU accelerator as ideal for these new, demanding and latency-sensitive workloads, and we plan to aggressively leverage them in our GPU system product line.”
— Charles Liang, president and CEO, Supermicro

Keep Current on NVIDIA
Subscribe to the NVIDIA blog, follow us on Facebook, Google+, Twitter, LinkedIn and Instagram, and view NVIDIA videos on YouTube and images on Flickr.

About NVIDIA
NVIDIA’s (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world. More information at http://nvidianews.nvidia.com/.

For further information, contact:
Kristin Bryson
PR Director for Data Center AI, HPC and Accelerated Computing
NVIDIA Corporation
+1-203-241-9190
Email Contact

Certain statements in this press release, including, but not limited to, statements as to: Tesla T4 GPU and TensorRT software enabling intelligent voice, video, image and recommender services; NVIDIA’s AI data center platform delivering the industry’s most advanced inference acceleration for voice, video, image and recommendation services; the benefits, performance and abilities of the NVIDIA TensorRT Hyperscale Inference Platform, including Tesla T4 GPUs based on Turing architecture and new inference software, its ability to deliver faster performance at lower latency than other offerings, and enabling hyperscale data centers to offer new services; customers racing toward a future where every product and service will be touched and improved by AI and the TensorRT Hyperscale Platform being built to bring this to reality faster and more efficiently than previously thought possible; the value the estimated AI inference industry will grow to in the next five years; the performance and features of Tesla T4 GPUs; NVIDIA TensorRT 5 expanding the set of neural network optimizations for mixed-precision workloads; NVIDIA TensorRT inference server enabling applications to use AI models, its availability from the NVIDIA GPU Cloud container registry and its ability to maximize GPU utilization; NVIDIA GPUs enabling Microsoft to reduce object detection latency for images and Microsoft looking forward to working with NVIDIA’s next-generation inference hardware and software to expand the way people benefit from AI products and services; Google Cloud planning to add support for Tesla T4 GPUs on the Google Cloud Platform soon; AI becoming increasingly pervasive, and inference being a critical capability customers need to deploy AI models; major server manufacturers voicing their support for the NVIDIA TensorRT Hyperscale Platform; NVIDIA Tesla T4 GPUs giving Cisco customers access to the most efficient accelerator for AI inference workloads; Dell EMC enhancing the PowerEdge server portfolio to help customers and its collaboration with NVIDIA playing a vital role in helping its customers; Fujitsu’s plan to incorporate Tesla T4 GPUs into its systems lineup and providing its customers with servers optimized for their growing AI needs; HPE using Tesla T4 GPUs to continue to modernize and accelerate the data center to enable inference at the edge; IBM’s plans to explore the Tesla T4 GPU accelerator to extend its state-of-the-art leadership for inference workloads; Kubernetes integrating NVIDIA products with Kubeflow and providing ways to deploy AI inference across diverse infrastructures; NVIDIA TensorRT inference server features enabling faster deployment of AI applications and improving infrastructure utilization; and Supermicro innovating in markets which generate data and require real-time decision making and its plans to leverage NVIDIA products in its GPU system product line are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different from expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing products and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q.
Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

© 2018 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, CUDA, NVIDIA Turing, NVLink, TensorRT and Tesla are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

A photo accompanying this announcement is available at http://www.globenewswire.com/NewsRoom/AttachmentNg/31016d3d-3f4c-413b-874b-e770338718f0



