The DOE National Laboratories host five of the top 10 HPC systems in the world. The photo above shows Vulcan at LLNL, Trinity at LANL, Titan at ORNL, and Peregrine at NREL.
The HPC4Mtls Program is led by Lawrence Livermore National Laboratory (LLNL), with Oak Ridge National Laboratory (ORNL), the National Energy Technology Laboratory (NETL), and Los Alamos National Laboratory (LANL) serving as principal laboratories.
All DOE national laboratories are eligible to participate as partners in this program.
HPC Systems Supporting the HPC4Mtls Program
DOE national laboratories host five of the top 10 HPC systems in the world. The HPC4Mtls Program's founding laboratories, LLNL, LANL, and ORNL, host three of these systems. Other DOE national laboratories will be added as the program grows.
Primary HPC Systems
| Site | System | Nodes | Cores | Architecture |
|------|--------|-------|-------|--------------|
| LLNL | Vulcan | 24,576 | 393,216 | IBM Blue Gene/Q; PPC A2 |
| LLNL | Cab | 1,296 | 20,736 | Intel Xeon E5-2670 |
| ORNL | Titan | 18,688 | 560,640 | Cray XK7; Opteron 6274 |
| ORNL | EoS | 736 | 11,776 | Cray XC30; Intel Xeon E5-2670 |
| NREL | Peregrine | 2,592 | 58,752 | HP Linux cluster; Intel Sandy Bridge, Ivy Bridge, Haswell |
| NETL | Joule | 1,512 | 24,192 | SGI HPC with Intel Sandy Bridge, Linux OS |
The table above shows the primary HPC systems supporting the HPC4Mtls Program at the principal laboratories. Other systems are also available.
The U.S. Department of Energy (DOE) national laboratories build and maintain a comprehensive ecosystem of leadership-class high performance computing (HPC) assets, including:
- Massive commodity clusters and advanced, pioneering architectures
- Data-intensive analytics and computing
- Robust, validated application codes
- Scientific and engineering expertise in applying HPC to complex problems across a vast number of domains
America's investment in extreme-scale computational R&D delivers high-value, high-impact results across a spectrum of crucial science and technology challenges in:
- National security
- Applied energy
- Fundamental science
HPC Managing Laboratories' Industry Outreach: