Oak Ridge National Laboratory’s exascale Frontier supercomputer – the first public exascale system in the world – debuted nearly a year ago. Now, increasingly high-profile use cases on Frontier are starting to emerge. Below, we’re including a blog post from the team at ORNL that highlights new cosmological codes that have been run on the groundbreaking system. You can find the original post on the ORNL website here.
A trio of new and improved cosmological simulation codes was unveiled in a series of presentations at the annual April Meeting of the American Physical Society in Minneapolis. Chaired by the Oak Ridge Leadership Computing Facility’s director of science, Bronson Messer, the session covering these next-generation codes heralds a new era of exascale computational astrophysics that promises to advance our understanding of the universe with models of unprecedented scale and resolution.

Powered by the incoming generation of exascale supercomputers — capable of a billion-billion floating-point operations per second — the updated versions of Cholla, HACC and Parthenon are the culmination of years of work by developers to prepare their codes for exascale’s thousandfold increase over petascale computing speed. With their successful early runs on the OLCF’s Frontier supercomputer, located at the Department of Energy’s Oak Ridge National Laboratory, the codes are ready to explore virtual domains of the cosmos that were previously beyond science’s reach.
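For a sense of scale, the prefixes work out as:

```latex
1\ \text{exaflops} = 10^{18}\ \text{FLOP/s}
= 1000 \times 10^{15}\ \text{FLOP/s}
= 1000\ \text{petaflops}.
```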
“These newly improved astrophysical codes provide some of the clearest demonstrations of the most empowering features of exascale computing for science,” said Messer, a computational astrophysicist, distinguished scientist at ORNL, and member of the team that won a 2022 R&D 100 Award for the Flash-X software. “All these teams are simulating an array of physical processes happening on scales ranging over many orders of magnitude — from the size of stars to the size of the universe — while incorporating feedback from one set of physics to others and vice versa. They represent some of the most challenging problems that will be attacked on Frontier, and I expect the results to be remarkably impactful.”
HACC/CRK-HACC
HACC, for Hardware/Hybrid Accelerated Cosmology Code, is a veteran simulator of the cosmos that focuses on large-scale structure formation in the dark sector, which includes dark energy, dark matter, neutrinos and the origins of primordial fluctuations.

HACC’s origins date back to the Roadrunner supercomputer at Los Alamos National Laboratory, the first machine to break the petaflop barrier — a million billion floating-point operations per second — in 2008. Currently developed by researchers at Argonne National Laboratory with support from DOE’s Exascale Computing Project, or ECP, HACC has been optimized for Frontier’s AMD Instinct™ GPU accelerators, and optimizations for Argonne’s Aurora supercomputer and its Intel GPUs are in the works.

With development support from the ECP’s ExaSky project, HACC leverages exascale’s increased computing power by packing in more physics models than the original code’s gravity solver. As survey data of the universe becomes more detailed and complex, simulation tools must become correspondingly more sophisticated to keep pace. Astrophysicists use observations to validate virtual mock-ups of the universe and to constrain the parameters used in the simulations; if the measurements don’t match the simulation’s output, there’s a disparity to resolve.

One of HACC’s biggest goals is to provide survey-scale mock catalogs for current large-scale structure surveys such as the Rubin Observatory LSST, SPHEREx and CMB-S4.
“Being able to mock these surveys requires a tremendous amount of volume to simulate and lots of physics to compute. And none of these things are achievable with the previous generation of supercomputers,” said Nicholas Frontiere, a computational scientist at Argonne and co-team leader for development of CRK-HACC, which adds hydrodynamics modeling. “It’s only in the exascale regime that you can really start simulating the volumes that are required for these types of surveys.”

HACC’s future sounds pretty straightforward: more is better.

“The next horizon for us is including more and more detailed astrophysics in our simulations so that, even with the same volumes and simulations, you get better resolution,” Frontiere said. “So, most of our research is really adding more physics, which is something we could never have considered without running at the scales we are now.”
Cholla
Originally developed in 2014 by an astrophysics doctoral student at the University of Arizona, the GPU-accelerated fluid dynamics solver Cholla, for Computational Hydrodynamics On ParaLLel Architectures, was intended to help users better understand how the universe’s gases evolve over time. That student, Evan Schneider, is now an assistant professor in the University of Pittsburgh’s Department of Physics and Astronomy, and Cholla has become an astrophysics powerhouse.

Schneider intends to use Cholla to simulate an entire galaxy the size of the Milky Way at the scale of a single star cluster; modeling a massive galaxy at this resolution would be a first for computational astrophysics. Doing so will require more than just optimizing the code to run on Frontier, an effort that was supported by the Frontier Center for Accelerated Application Readiness, or CAAR, program.

Cholla has also attracted helping hands on its way to exascale — namely, those of Bruno Villasenor, who was studying dark matter as a doctoral student at the University of California, Santa Cruz. He and his Ph.D. adviser, Brant Robertson, decided to use Cholla for their simulations of the Lyman-alpha forest, a series of absorption features formed as the light from distant quasars encounters material along its journey to Earth. Doing so, however, required several additional physics models, so Villasenor integrated them into Cholla.
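For readers unfamiliar with the term, a short aside (standard physics, not anything specific to Cholla): each intervening cloud of neutral hydrogen at redshift $z$ absorbs the quasar’s light at the Lyman-alpha transition in its own rest frame, so the line appears in the observed spectrum at

```latex
\lambda_{\mathrm{obs}} = (1 + z)\,\lambda_{\mathrm{Ly}\alpha},
\qquad \lambda_{\mathrm{Ly}\alpha} \approx 1215.67\ \text{\AA}.
```

Many clouds at many redshifts stamp many such lines onto a single spectrum, hence the “forest,” and modeling it faithfully requires gas dynamics, gravity and cosmological expansion acting together.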
“Bruno added gravity, added particles and added cosmology so that we could do these big cosmological boxes. And that really changed Cholla from a pure fluid dynamics code into an astrophysics code,” Schneider said.

Now, with its new capabilities powered by Frontier’s exascale speed, Cholla is poised to accomplish breakthrough work that was impossible on earlier systems.

“Resolution is the name of the game. The holy grail for me is to be able to run a simulation of a Milky Way-sized galaxy with individual supernova explosions resolved. So far, people have only been able to do that for tiny galaxies, because you must have a high enough resolution to cover the entire disk at something like a parsec scale,” Schneider said. “It sounds simple because it’s just the difference between running a simulation with 4,000-cubed cells and a simulation with 10,000-cubed cells, but that’s roughly 60 billion cells versus 1 trillion cells, total. You really need the jump to exascale to be able to do that.”
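The numbers in that quote check out:

```latex
4000^3 = 6.4 \times 10^{10} \approx 64\ \text{billion cells},
\qquad 10000^3 = 10^{12} = 1\ \text{trillion cells}.
```

That is roughly a 16-fold jump in cell count. Assuming, purely for illustration, five double-precision variables per cell (not Cholla’s actual field layout), the trillion-cell grid alone would hold about $10^{12} \times 5 \times 8\,\mathrm{B} = 40$ TB of state before any computation begins.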
That leap is about to happen, and Schneider and her team can’t wait to get started.

“It’s really exciting to work on building something for a long time and then finally be able to see it at scale. Everybody’s just excited to see what we’re going to be able to do,” Schneider said.
Parthenon
At its core, the open-source Parthenon is an adaptive mesh refinement code for grid-based simulations, with the ability to refine resolution only in certain regions of a simulation grid to increase the speed and accuracy of its calculations. Its development team, including Forrest Glines, a Metropolis Postdoctoral Fellow at Los Alamos National Laboratory, and Philipp Grete, a Marie Skłodowska-Curie Actions Postdoctoral Fellow at the Hamburg Observatory, uses Parthenon in its own code, called AthenaPK, to simulate different astrophysical systems — primarily turbulence and feedback from active galactic nuclei, or AGN, jets.

But what makes Parthenon unique in exascale-class computational astrophysics is its performance portability through Kokkos, which allows Parthenon to serve as a framework for other fluid dynamics codes to leverage mesh refinement no matter what architecture they’re running on — NVIDIA GPUs, AMD GPUs, Intel GPUs, Arm CPUs or traditional CPUs.

“Parthenon’s performance portability allows researchers to run on any supercomputer platform that the underlying Kokkos framework supports. Developers don’t have to worry about reimplementing their simulation code for each new platform,” Glines said. “The faster codes driven by Parthenon allow more simulations at higher resolution and thus higher-fidelity models of the physical systems they’re studying.”
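To make the “single source, many backends” idea concrete, here is a minimal Kokkos kernel in C++. It is an illustrative sketch written for this article, not code taken from Parthenon or AthenaPK; the same source compiles against Kokkos’s CUDA, HIP, SYCL or OpenMP backends without modification:

```cpp
#include <Kokkos_Core.hpp>
#include <cstdio>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);
  {
    const int n = 1 << 20;

    // Views are portable arrays; their memory lives wherever the enabled
    // backend puts it (GPU device memory or host memory).
    Kokkos::View<double*> u("u", n);
    Kokkos::View<double*> u_new("u_new", n);  // zero-initialized by default

    // Initial condition, written once, dispatched to whatever backend
    // Kokkos was built with (CUDA, HIP, SYCL, OpenMP, serial, ...).
    Kokkos::parallel_for("init", n, KOKKOS_LAMBDA(const int i) {
      u(i) = (i % 2 == 0) ? 1.0 : 0.0;
    });

    // One explicit diffusion step over the interior cells: the kind of
    // stencil update a grid-based fluid code performs constantly.
    Kokkos::parallel_for("diffuse", n - 2, KOKKOS_LAMBDA(const int j) {
      const int i = j + 1;  // skip the boundary cells at 0 and n-1
      u_new(i) = u(i) + 0.25 * (u(i - 1) - 2.0 * u(i) + u(i + 1));
    });

    // Portable reduction to a checksum so the run has observable output.
    double sum = 0.0;
    Kokkos::parallel_reduce("checksum", n,
        KOKKOS_LAMBDA(const int i, double& acc) { acc += u_new(i); }, sum);

    std::printf("checksum = %f\n", sum);
  }
  Kokkos::finalize();
  return 0;
}
```

Parthenon layers its mesh-refinement machinery on top of kernels written in this style, which is how a downstream code such as AthenaPK can express its physics once and run on Frontier’s AMD GPUs, Aurora’s Intel GPUs or NVIDIA-based systems alike.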
Parthenon is already being used in a variety of codes, including Phoebus, a general relativistic magnetohydrodynamics, or GRMHD, code being developed at Los Alamos National Laboratory, and KHARMA, another GRMHD code being developed at the University of Illinois Urbana-Champaign. KHARMA was already used in an Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, project last year.

Meanwhile, the team’s AthenaPK software is being used in a 2023 INCITE project on Frontier to study “feedback and energetics from magnetized AGN jets in galaxy groups and clusters.”

“We’re particularly excited about our own project because, without Parthenon and without AthenaPK, the computational physics challenge — resolving both the jet and the surrounding diffuse plasma at sufficiently high resolution to study self-regulation — wouldn’t have been possible on any other machine or with any other code that we’re aware of right now,” Grete said.
UT-Battelle manages ORNL for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

This article was originally posted on ORNL’s website and is available here.