Hewlett Packard Enterprise (HPE) and the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, today announced that ALCF will deploy the new Cray ClusterStor E1000, the most efficient parallel storage solution, as its latest storage system. The new collaboration supports ALCF’s scientific research in areas such as earthquake seismology, aerospace turbulence and shock-waves, physical genomics and more. The latest deployment advances storage capacity for ALCF’s workloads that involve converged modeling, simulation, artificial intelligence (AI) and analytics, in preparation for Aurora, ALCF’s forthcoming exascale supercomputer, powered by HPE and Intel, and the first of its kind expected to be delivered in the U.S. in 2021.
The Cray ClusterStor E1000 system uses purpose-built software and hardware features to meet high-performance storage requirements of any size with significantly fewer drives. Designed to support the Exascale Era, which is characterized by the explosion of data and converged workloads, the Cray ClusterStor E1000 will enable ALCF’s future Aurora supercomputer to target a multitude of data-intensive workloads required to make breakthrough discoveries at unprecedented speed.
“ALCF is committed to enabling new experiences with Exascale Era technologies by deploying the infrastructure required for converged workloads in modeling, simulation, AI and analytics,” said Peter Ungaro, senior vice president and general manager, HPC and AI, at HPE. “Our recent introduction of the Cray ClusterStor E1000 delivers ALCF unmatched scalability and performance to meet next-generation HPC storage needs and support emerging, data-intensive workloads. We look forward to continuing our collaboration with ALCF and empowering its research community to unlock new value.”
ALCF’s two new storage systems, which it has named “Grand” and “Eagle,” use the Cray ClusterStor E1000 platform to gain an entirely new, cost-effective high-performance computing (HPC) storage solution that can accurately and efficiently handle growing converged workloads that today’s offerings cannot support.
“When Grand launches, it will benefit ALCF’s legacy petascale machines, offering increased capacity for the Theta compute system and enabling new levels of performance for not just traditional checkpoint-restart workloads, but also for complex workflows and metadata-intensive work,” said Mark Fahey, director of operations, ALCF.
“Eagle will help support the ever-increasing importance of data in the day-to-day activities of science,” said Michael E. Papka, director, ALCF. “By leveraging our experience with our current data-sharing system, Petrel, this new storage will help eliminate barriers to productivity and improve collaborations across the research community.”
The two new systems will provide a total of 200 petabytes (PB) of storage capacity and, through the Cray ClusterStor E1000’s intelligent software and hardware designs, will more effectively align data flows with target workloads. ALCF’s Grand and Eagle systems will help researchers accelerate a range of scientific discoveries across disciplines, and are each assigned to address the following:
- Computational capacity – ALCF’s “Grand” provides 150 PB of center-wide storage and new levels of input/output (I/O) performance to support massive computational needs for its users.
- Simplified data-sharing – ALCF’s “Eagle” provides a 50 PB community file system to make data-sharing easier than ever among ALCF users, their collaborators and third parties.
ALCF plans to deliver its Grand and Eagle storage systems in early 2020. The systems will initially connect to existing ALCF supercomputers powered by HPE HPC systems: Theta, based on the Cray® XC40-AC™, and Cooley, based on the Cray CS-300. ALCF’s Grand, which is capable of 1 terabyte per second (TB/s) of bandwidth, will be optimized to support converged simulation science and data-intensive workloads once the Aurora exascale supercomputer is operational.