IDC Announces Winners of Sixth HPC Innovation Excellence Awards

DENVER — (BUSINESS WIRE) — November 19, 2013 — International Data Corporation (IDC) today announced the sixth round of recipients of the HPC Innovation Excellence Award at the SC'13 supercomputing conference in Denver, Colorado. Prior winners were announced at the ISC'11, SC'11, ISC'12, SC'12, and ISC'13 supercomputing conferences.

The HPC Innovation Excellence Award recognizes noteworthy achievements by users of high performance computing (HPC) technologies. The program's main goals are: to showcase return on investment (ROI) and scientific innovation success stories involving HPC; to help other users better understand the benefits of adopting HPC and justify HPC investments, especially for small and medium-size businesses (SMBs); to demonstrate the value of HPC to funding bodies and politicians; and to expand public support for increased HPC investments.

"IDC research has shown that HPC can impact innovation cycles greatly and can potentially generate ROI. The award program aims to collect a large set of success stories across many research disciplines, industries, and application areas," said Chirag Dekate, Research Manager, High-Performance Systems at IDC. "The winners achieved clear success in applying HPC to greatly improve business ROI, scientific advancement, and/or engineering successes. Many of the achievements also directly benefit society."

Winners of the first five rounds of awards, announced in 2011, 2012, and at ISC'13 in 2013, included 29 organizations from the U.S., 3 each from Italy and the People's Republic of China, 2 each from India and the UK, and 1 each from Australia, Canada, Spain, and Sweden.

The new award winners and project leaders announced at SC'13 are as follows (contact IDC for additional details about the projects):

  • GE Global Research (U.S.) Using a 40-million-CPU-hour Department of Energy award, GE Global Research modeled the freezing behavior of water droplets on six different engineered surfaces under six operating conditions on Titan, the hybrid CPU/GPU system at Oak Ridge National Laboratory (ORNL). Through recent advances in the field, including a joint simulation enhancement effort with ORNL to fully leverage the hardware infrastructure, GE Global Research has accelerated these simulations by approximately 200-fold compared with just two years ago. Lead: Masako Yamada
  • The Procter & Gamble Company (U.S.) P&G researchers and collaborators at Temple University developed molecular- and mesoscale-level models to understand the complex molecular interactions of full-formula consumer products such as shampoos, conditioners, facial creams, and laundry detergents. The HPC-driven research shed light on how the complete formulations perform, rather than inferring performance from isolated calculations, and led to a better understanding of interfacial phenomena, phase behavior, and the performance of several P&G products. Lead: Kelly L. Anderson
  • National Institute of Supercomputing and Networking, Korea Institute of Science and Technology Information (Korea) The EDISON (EDucation and research Integration through Simulation On the Net) Project, funded by Korea's Ministry of Science, ICT and Future Planning, established a Web-based infrastructure through which users across the country can easily access and utilize various engineering and science simulation tools for educational and research purposes. The EDISON project is accelerating research in five key areas: Computational Fluid Dynamics, Computational Chemistry, Nano Physics, Computational Structural Dynamics, and Multi-disciplinary Optimization. The project uses a novel partnership model between the project team and each domain to develop area-specific simulation tools that make HPC accessible to domain specialists. Lead: Kumwon Cho
  • GE Global Research (U.S.) GE Global Research's work on Large Eddy Simulations (LES) leveraged petascale computing to break barriers in accurately characterizing the key flow physics of multi-scale turbulent mixing in boundary-layer and shear flows. Findings from this research will significantly improve prediction and design capabilities for next-generation aircraft engines and wind turbines, both by demonstrating the viability of LES as a characterization tool and by providing physics guidance. Lead: Umesh Paliath
  • Spectraseis Inc (U.S.) and CADMOS, University of Lausanne (Switzerland) Researchers doubled both acoustic and elastic solver throughput while also improving code size and maintainability by harnessing the massively parallel computing capabilities of Fermi and Kepler GPUs. The efficiency gains from redesigning the code for GPUs reduced the time to solution from hours to seconds, and the improved capability allowed Spectraseis to move from 2D to 3D and, in several cases, obtain more than 100x speed-ups (a simplified sketch of this kind of solver kernel appears after this list). Leads: Igor Podladtchikov and Yury Podladchikov
  • Intelligent Light (U.S.) Intelligent Light addressed the challenge of high volumes of CFD data using FieldView 14 data management and process automation tools. Intelligent Light contributed results from approximately 100 cases with more than 10,000 time steps each to deliver a complete response to the workshop objectives. A Cray XE6 was used to generate the CFD solutions and perform much of the post-processing. This project successfully demonstrated the value and practicality of using innovative workflow engineering with automation and data management for complex CFD studies. Lead: Dr. Earl P.N. Duque
  • Facebook (U.S.) Facebook manages a social graph composed of people, their friendships, subscriptions, and other connections. Facebook modified Apache Giraph to allow loading vertex data and edges from separate sources (GIRAPH-155). With appropriate garbage-collection and performance tuning, Facebook was able to run an iteration of PageRank on an actual one-trillion-edge social graph formed by various user interactions in fewer than four minutes (a simplified sketch of this vertex-centric computation model appears after this list). Facebook can now cluster a monthly-active-user data set of one billion input vectors, each with 100 features, into 10,000 centroids with k-means in less than 10 minutes per iteration. Lead: Avery Ching / Apache Giraph
  • HydrOcean/Ecole Centrale Nantes (France) SPH-flow is an innovative fluid dynamics solver based on a meshless, compressible, and time-explicit approach. The SPH-flow solver has been used in several industrial projects, including: impact forces of aircraft and helicopter ditching; free-surface simulations of ship wake and wave fields; multiphase emulsion simulations; extreme wave impacts on structures; simulation of tire hydroplaning; water films around car bodies; and underwater explosions. Leads: Dr. Erwan Jacquin, CEO of HydrOcean (a spinoff of the Ecole Centrale Nantes fluid dynamics lab), and Prof. David Le Touze, head of the SPH-flow research team at Ecole Centrale Nantes.
  • Imperial College London and NAG (UK) HPC experts from NAG and Imperial College London have implemented scientifically valuable new functionality and substantial performance improvements in the Incompact3D application. After the improvements, simulations can scale efficiently to 8,000 cores with a run time of around 3.75 days (wall-clock time), more than 6x faster than before the work. Furthermore, meshes for new high-resolution turbulence-mixing and flow-control simulations, which use up to 4096 × 4096 × 4096 grid points, can now utilize as many as 16,384 cores. Lead: NAG HECToR CSE Team
  • Queen Mary University of London and NAG (UK) NAG and Queen Mary University of London made significant improvements to the CABARET (Compact Accurate Boundary Adjusting high-Resolution Technique) code so that it can be used to solve the compressible Navier-Stokes equations and, in the context of this project, to investigate aircraft noise. The newly developed code was validated and tested against the serial code, and a parallel efficiency of 72% was observed when using 250 cores of the quad-core XT4 component of HECToR. Lead: NAG HECToR CSE Team
  • Southern California Earthquake Center (U.S.) SCEC has built a special simulation platform, CyberShake, which uses the time-reversal physics of seismic reciprocity to reduce the computational cost by 1000x. Additionally, the production time for a complete regional CyberShake model at seismic frequencies up to 0.5 Hz has been reduced by 10x, and four new hazard models have been run on NCSA Blue Waters and TACC Stampede. SCEC researchers have developed highly parallel, highly efficient CUDA-optimized wave propagation code, called AWP-ODC-GPU, that achieved a sustained performance of 2.8 petaflops on ORNL Titan. Lead: Southern California Earthquake Center Community Modeling Environment Collaboration
  • Princeton University/Princeton Plasma Physics Laboratory (U.S.) Using high-end supercomputing resources, advanced simulations of confinement physics for large-scale MFE plasmas have been carried out for the first time with very high phase-space resolution and long temporal duration, delivering important new scientific insights. This research was enabled by the new GTC-P code, developed to use multi-petascale capabilities on world-class systems such as the IBM Blue Gene/Q systems "Mira" at ALCF and "Sequoia" at LLNL. Leads: William Tang, Bei Wang, and Stephane Ethier
  • Oak Ridge Leadership Computing Facility, Oak Ridge National Laboratory (U.S.) Researchers at ORNL have used the Titan supercomputer to perform the first simulations of organic solar cell active layers at scales commensurate with actual devices. By modifying the LAMMPS molecular dynamics software to use GPU acceleration on Titan, the researchers were able to study how different polymer blends can be used to alter device morphology. The new insights will aid the rational design of cheaper solar cells with higher efficiency. Results are published in the journal Physical Chemistry Chemical Physics. Leads: W. Michael Brown and Jack C. Wells
  • Ford Werke GmbH (Germany) Researchers at Ford Werke GmbH have developed and deployed a new CAE process that enables optimization of the airflow through a vehicle's cooling package using complex 3D CFD analysis. The Ford team also demonstrated that it could run these complex simulations fast enough for use within the time constraints of a vehicle development project. The team's work on Jaguar at Oak Ridge National Laboratory will help Ford maximize the effectiveness and fuel efficiency of engine bay designs throughout the company. Leads: Dr. Burkhard Hupertz and Alex Akkerman
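
For readers unfamiliar with the kind of kernel the Spectraseis/CADMOS team moved to GPUs, the sketch below is a deliberately simplified, CPU-only NumPy version of an explicit finite-difference time step for the 2D constant-velocity acoustic wave equation. It is an illustration only: the grid size, spacing, velocity, and second-order scheme are assumptions made for the example and do not represent Spectraseis' production solvers, which are 3D, cover elastic as well as acoustic physics, and are written for Fermi and Kepler GPUs.

```python
# Illustrative sketch only: a minimal 2D acoustic wave-equation time-stepper in
# NumPy. All parameters (grid, spacing, velocity, time step) are assumptions
# for this example, not values from the Spectraseis/CADMOS project.
import numpy as np

NX, NZ = 200, 200      # grid points (assumed)
DX = 10.0              # grid spacing in metres (assumed)
DT = 0.001             # time step in seconds (assumed; within the CFL limit)
C = 1500.0             # constant acoustic velocity in m/s (assumed)
NSTEPS = 500

def step(p_prev, p_curr):
    """Advance the pressure field by one explicit time step."""
    lap = np.zeros_like(p_curr)
    # Five-point Laplacian on interior points; boundaries stay fixed at zero.
    lap[1:-1, 1:-1] = (
        p_curr[2:, 1:-1] + p_curr[:-2, 1:-1] +
        p_curr[1:-1, 2:] + p_curr[1:-1, :-2] -
        4.0 * p_curr[1:-1, 1:-1]
    ) / DX**2
    # Second-order leapfrog update: p(t+dt) = 2p(t) - p(t-dt) + (c*dt)^2 * lap
    return 2.0 * p_curr - p_prev + (C * DT) ** 2 * lap

p_prev = np.zeros((NX, NZ))
p_curr = np.zeros((NX, NZ))
p_curr[NX // 2, NZ // 2] = 1.0     # impulsive point source at the grid centre

for _ in range(NSTEPS):
    p_prev, p_curr = p_curr, step(p_prev, p_curr)

print("peak |pressure| after %d steps: %.3e" % (NSTEPS, np.abs(p_curr).max()))
```

Every time step applies the same small stencil to every grid cell, which is exactly the data-parallel pattern that maps well onto the thousands of concurrent threads of a Fermi or Kepler GPU.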
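The Facebook result relies on the bulk-synchronous, vertex-centric ("think like a vertex") model that Apache Giraph implements. The short Python sketch below reproduces that model for PageRank on a toy graph; it is a conceptual illustration with an invented four-vertex graph and invented names, not Facebook's code and not the Giraph API.

```python
# Conceptual sketch of vertex-centric (Pregel-style) PageRank, the model that
# Apache Giraph implements. The toy graph and all names are invented for
# illustration; this is not Giraph code.
DAMPING = 0.85
SUPERSTEPS = 30

def pagerank(adjacency):
    """adjacency: dict mapping vertex id -> list of out-neighbour ids."""
    n = len(adjacency)
    rank = {v: 1.0 / n for v in adjacency}           # initial value per vertex

    for _ in range(SUPERSTEPS):
        # "Messages": each vertex sends rank / out-degree to each out-neighbour.
        incoming = {v: 0.0 for v in adjacency}
        for v, neighbours in adjacency.items():
            if neighbours:
                share = rank[v] / len(neighbours)
                for u in neighbours:
                    incoming[u] += share
        # Each vertex combines its received messages into a new value.
        rank = {v: (1.0 - DAMPING) / n + DAMPING * incoming[v]
                for v in adjacency}
    return rank

if __name__ == "__main__":
    toy_graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["a", "c"]}
    for vertex, score in sorted(pagerank(toy_graph).items()):
        print(vertex, round(score, 4))
```

In Giraph the same per-vertex update is expressed as a compute() call made for every vertex in every superstep and distributed across many workers, which is what allows the iteration to scale to a graph with a trillion edges.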
