The combination of WekaFS™ and NVIDIA® enables customers to accelerate AI/ML initiatives, dramatically improving their time-to-value and time-to-market
CAMPBELL, Calif. — (BUSINESS WIRE) — June 29, 2021 — WekaIO™ (Weka), one of the fastest-growing data platforms for artificial intelligence/machine learning (AI/ML), life sciences research, and high-performance computing (HPC), today announced that the WekaFS™ Limitless Data Platform can leverage the newly turbocharged NVIDIA HGX™ AI supercomputing platform to deliver the performance and throughput required by today’s most demanding enterprise workloads.
The NVIDIA HGX platform supports three key technologies: the NVIDIA A100 80GB GPU; NVIDIA NDR 400G InfiniBand networking; and NVIDIA Magnum IO GPUDirect® Storage software.
Weka’s launch-day support of NVIDIA’s next-generation technologies is the result of a close collaborative relationship between the companies and Weka’s continued commitment to providing customers with the latest features they need to store, protect, and manage their data.
WekaFS™ recently delivered among the highest aggregate NVIDIA Magnum IO GPUDirect® Storage throughput of any storage system tested at Microsoft Research. The tests were run in a staging environment where WekaFS was deployed in conjunction with multiple NVIDIA DGX-2™ servers, and they achieved the highest throughput of any storage solution tested to date.i This performance was achieved and verified by running the NVIDIA gdsio utility as a stress test, which showed sustained throughput over the duration of the test.
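For reference, a sustained-throughput stress test of this kind can be scripted around the gdsio utility that ships with NVIDIA Magnum IO GPUDirect Storage. The sketch below is illustrative only: the mount path, GPU index, thread count, sizes, and duration are assumptions, not the parameters used in the Microsoft Research tests.

    #!/usr/bin/env python3
    # Minimal sketch of a gdsio-based sustained-read stress test.
    # gdsio is installed with NVIDIA GPUDirect Storage and is assumed to be on PATH.
    # A read test assumes test files already exist in the target directory
    # (for example, created by a prior gdsio write pass).
    import subprocess

    cmd = [
        "gdsio",
        "-D", "/mnt/weka/gdsio",  # test directory on the WekaFS mount (assumed path)
        "-d", "0",                # GPU index to target
        "-w", "32",               # number of worker threads
        "-s", "1G",               # per-file size
        "-i", "1M",               # I/O size per request
        "-x", "0",                # transfer type 0 = GPUDirect (storage to GPU memory)
        "-I", "0",                # I/O type 0 = sequential read
        "-T", "120",              # run for 120 seconds to observe sustained throughput
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)          # gdsio reports aggregate throughput and latency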
“Advanced AI development requires powerful computing, which is why NVIDIA works with innovative solution providers like WekaIO,” said Dion Harris at NVIDIA. “Weka’s support for the NVIDIA HGX reference architecture and NVIDIA Magnum IO ensures customers can quickly deploy world-leading infrastructure for AI and HPC workloads across a growing number of use cases.”
Different stages within AI data pipelines have distinct storage requirements: massive ingest bandwidth, mixed read/write handling, and ultra-low latency. These differing needs often result in a separate storage silo for each stage, so business and IT leaders must reconsider how they architect their storage stacks and make purchasing decisions for these new workloads. By leveraging the upgrades to NVIDIA’s platform, customers can realize significant performance gains with far less overhead, process data faster while minimizing power consumption, and scale AI/ML workloads while achieving improved accuracy.
Using WekaFS in conjunction with the new additions to the NVIDIA HGX AI supercomputing platform gives channel partners the opportunity to deploy high-performance solutions to a growing number of industries that require fast access to data, whether on premises or in the cloud. Industries Weka serves that require an HPC storage system responsive to application demands include manufacturing, life sciences, scientific research, energy exploration and extraction, and financial services.
“NVIDIA is not only a technology partner to Weka but also an investor,” said Liran Zvibel, Co-founder and CEO at WekaIO. “As such, we work closely to ensure that our Limitless Data Platform supports and incorporates the latest improvements from NVIDIA as they become available. We have overcome significant challenges facing modern enterprise workloads through our collaborations with leading technology providers like NVIDIA and look forward to continuing our fruitful relationship into the future.”
WekaFS is the world’s fastest and most scalable POSIX-compliant parallel file system, designed to transcend the limitations of legacy file systems that rely on local storage, NFS, or block storage, making it ideal for data-intensive AI and HPC workloads. WekaFS is a clean-sheet design that integrates NVMe-based flash storage for the performance tier with GPU servers, object storage, and ultra-low-latency interconnect fabrics in an NVMe-over-Fabrics architecture, creating an extremely high-performance scale-out storage system. WekaFS performance scales linearly as more servers are added to the storage cluster, allowing the infrastructure to grow with the increasing demands of the business.
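Because WekaFS presents a POSIX-compliant file system, applications can use ordinary file I/O against the mount point without a proprietary client API. The minimal sketch below illustrates the idea; the mount path is an assumption and will differ per cluster.

    import os

    mount = "/mnt/weka"                         # assumed WekaFS mount point
    path = os.path.join(mount, "demo.bin")

    # Ordinary POSIX file I/O -- no WekaFS-specific API is required.
    with open(path, "wb") as f:
        f.write(os.urandom(4 * 1024 * 1024))    # write 4 MiB of test data
    with open(path, "rb") as f:
        data = f.read()
    print(f"read back {len(data)} bytes from {path}")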
Additional resources:
- NVIDIA GPUDirect® Storage Plus WekaIO™ Provides More Than Just Performance
- Microsoft Research Customer Use Case: WekaIO and NVIDIA GPUDirect Storage Results with NVIDIA DGX-2 Servers
- Weka AI and NVIDIA A100 Reference Architecture
- Weka AI and NVIDIA accelerate AI data pipelines
About WekaIO
WekaIO (Weka) is used by eight of the Fortune 50 enterprise organizations to uniquely solve the newest, biggest problems holding back innovation and discovery. Weka solutions are purpose-built to future-ready the accelerated and agile data center. Optimized for NVMe-flash and the hybrid cloud, its advanced architecture handles the most demanding storage challenges in the most data-intensive technical computing environments, delivering truly epic performance at any scale, enabling organizations to maximize the full value of their data center investments. Weka helps the enterprise solve big IT infrastructure problems to accelerate business outcomes and speed productivity. For more information, go to https://www.weka.io.
Follow WekaIO: Twitter, LinkedIn, and Facebook
WekaIO, WekaFS, Weka AI, Weka Innovation Network, Weka Within, Weka AI logo, WIN logo, Weka Within logo, and the WekaIO logo are trademarks of WekaIO, Inc.
i Performance numbers shown here with NVIDIA GPUDirect Storage on NVIDIA DGX A100 slots 0-3 and 6-9 are not the officially supported network configuration and are for experimental use only. Sharing the same network adapters for both compute and storage may impact the performance of standard or other benchmarks previously published by NVIDIA on DGX A100 systems.
View source version on businesswire.com: https://www.businesswire.com/news/home/20210629005396/en/