Scientific Achievement

Scientific simulations at the exascale, which require powerful supercomputers capable of 10¹⁸ floating-point operations per second, are beginning to achieve production quality [1]. In 2022, the Accelerator Modeling Program (AMP) in Berkeley Lab’s Accelerator Technology & Applied Physics (ATAP) Division and its collaborators succeeded in running a laser-plasma simulation [3] on an early version of Frontier, the world’s first reported exascale supercomputer, at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL). The following year, simulations like those conducted by AMP helped ORNL declare “acceptance” of Frontier into the Department of Energy (DOE) supercomputing pool. This work led to a joint publication in the prestigious Supercomputing conference series [1] and a best-paper nomination.

Significance and Impact

Computer modeling at high fidelity on the world’s largest machines is central to designing advanced particle accelerators, particularly the next generation of colliders, and to introducing scalable kinetic modeling to inertial confinement fusion. Developing modern simulation codes on the latest DOE supercomputers is a core activity of AMP.

Team science and collaboration

AMP’s approach is based on team science and close collaboration with national and international partners. The program conducts large-scale physics simulations and leads application design and numerical algorithm research for the Beam, Plasma & Accelerator Simulation Toolkit (BLAST). AMP has a long-standing collaboration with Berkeley Lab’s Applied Mathematics and Computational Research Division (AMCRD), the Scientific Data Division, and the National Energy Research Scientific Computing Center (NERSC). These partners maintain the AMReX software (a publicly available framework for building adaptive mesh refinement applications) and scalable data storage methods, and they bring expertise in performance tuning on NERSC’s leading-edge supercomputers.

Running applications at exascale requires many components to work together beyond these core activities. For example, scalable visualization [2,4], input/output for petabytes of data, hardware vendor-specific numerical solvers, machine-specific performance analysis and tuning, and deployment on novel supercomputers all require national and international collaboration.
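
To make the input/output component concrete, the sketch below writes one field record with the openPMD-api library, an open standard for particle-mesh data that WarpX supports; the file name, record name, and array size are illustrative assumptions, not AMP’s actual output configuration.

    // Minimal sketch: write one field record with openPMD-api (illustrative only).
    #include <openPMD/openPMD.hpp>

    #include <cstdint>
    #include <memory>

    int main ()
    {
        namespace io = openPMD;

        // a file series; "%T" is replaced by the iteration number at write time
        io::Series series("diags/example_%T.h5", io::Access::CREATE);

        // declare one field record, the x component of E, for iteration 100
        auto E_x = series.iterations[100].meshes["E"]["x"];
        std::uint64_t const n = 1024;  // tiny here; production runs write far larger arrays
        E_x.resetDataset(io::Dataset(io::Datatype::DOUBLE, {n}));

        // fill a buffer and register it; openPMD-api holds the pointer until flush
        auto data = std::shared_ptr<double>(new double[n], std::default_delete<double[]>());
        for (std::uint64_t i = 0; i < n; ++i) { data.get()[i] = static_cast<double>(i); }
        E_x.storeChunk(data, {0}, {n});

        // flush performs the actual write; in parallel runs each rank writes its own chunk
        series.flush();
        return 0;
    }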

Research Scientist Axel Huebl (left) and Senior Scientist Jean-Luc Vay from ATAP’s Accelerator Modeling Program (Credit: Berkeley Lab)

In particular, AMP collaborated with OLCF to complete the acceptance testing of Frontier, an exascale supercomputer [1]. AMP supports world-class facilities by providing a critical, real-world application, while collaborative tuning and testing are essential to realizing capable scientific computing infrastructure at DOE.

In 2021/2022, the AMP code WarpX was one of the first codes selected by NERSC for acceptance testing of its Perlmutter supercomputer and was also used for acceptance testing of Frontier at OLCF. The two supercomputers use different GPUs to accelerate computations: Nvidia GPUs in Perlmutter at NERSC and AMD GPUs in Frontier at OLCF.
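
A single code base can serve both machines through a performance-portability layer such as AMReX, on which WarpX is built. The sketch below, a simplified particle drift written against AMReX’s ParallelFor abstraction, illustrates the idea; it is not WarpX source code, and the array names and values are invented for illustration.

    // Minimal sketch of a performance-portable kernel with AMReX: the same C++
    // compiles to a CUDA kernel for Nvidia GPUs (Perlmutter) or a HIP kernel for
    // AMD GPUs (Frontier), selected when AMReX is configured at build time.
    #include <AMReX.H>
    #include <AMReX_Gpu.H>
    #include <AMReX_GpuContainers.H>
    #include <AMReX_Vector.H>

    int main (int argc, char* argv[])
    {
        amrex::Initialize(argc, argv);
        {
            int const np = 1'000'000;          // number of macro-particles (illustrative)
            amrex::Real const dt = 1.0e-15;    // time step in seconds (illustrative)

            // initialize positions and velocities on the host, then copy to device memory
            amrex::Vector<amrex::Real> x_h(np, 0.0), ux_h(np, 1.0e6);
            amrex::Gpu::DeviceVector<amrex::Real> x(np), ux(np);
            amrex::Gpu::copyAsync(amrex::Gpu::hostToDevice, x_h.begin(), x_h.end(), x.begin());
            amrex::Gpu::copyAsync(amrex::Gpu::hostToDevice, ux_h.begin(), ux_h.end(), ux.begin());

            amrex::Real* const xp  = x.data();
            amrex::Real* const uxp = ux.data();

            // one portable loop: a GPU kernel on CUDA/HIP/SYCL builds, a CPU loop otherwise
            amrex::ParallelFor(np, [=] AMREX_GPU_DEVICE (int i) noexcept
            {
                xp[i] += uxp[i] * dt;          // drift step of a simplified particle push
            });
            amrex::Gpu::streamSynchronize();
        }
        amrex::Finalize();
    }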

Supercomputing 2023

At Supercomputing 2023 (SC23), held in Denver, Colorado, from November 12-17, 2023, AMP researchers presented further progress on collaborations with data and computer scientists on in situ visualization and input/output [2,4]. AMP Research Scientist Axel Huebl chaired the In Situ Infrastructures for Enabling Extreme-scale Analysis and Visualization (ISAV) 2023 paper track, underscoring AMP’s deep collaborations and high standing in the high-performance computing community.

In situ processing is performed while the simulation is running and can provide orders of magnitude more detailed insight into a simulation than traditional post-processing workflows, which rely on first writing data to and then reading it back from file storage. The conventional workflow remains important, but it becomes a bandwidth bottleneck when many petabytes of data are involved.
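
To make the distinction concrete, the toy program below reduces particle data to a small energy histogram while it is still in memory, so only a handful of numbers per step leave the node instead of the full particle arrays; all sizes and values are invented for illustration.

    // Toy, self-contained illustration of the in situ idea: instead of writing
    // every particle to disk each step and analyzing later, reduce the data to a
    // small histogram while it is still in memory.
    #include <array>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main ()
    {
        std::size_t const num_particles = 5'000'000;  // stand-in for billions in a real run
        int const num_steps = 3;
        constexpr int num_bins = 32;
        double const e_max = 10.0;                    // histogram range [0, e_max)

        // mock particle energies; a real code would evolve these self-consistently
        std::mt19937 rng(42);
        std::normal_distribution<double> dist(0.0, 1.0);
        std::vector<double> energy(num_particles);
        for (auto& e : energy) { double const u = dist(rng); e = u * u; }

        for (int step = 0; step < num_steps; ++step)
        {
            // in situ reduction: O(num_bins) numbers per step instead of O(num_particles)
            std::array<std::size_t, num_bins> hist{};
            for (double const e : energy)
            {
                int bin = static_cast<int>(e / e_max * num_bins);
                if (bin >= num_bins) { bin = num_bins - 1; }
                ++hist[bin];
            }
            std::printf("step %d: counts in first bins = %zu %zu %zu ...\n",
                        step, hist[0], hist[1], hist[2]);
            // a production code would advance particles and fields here
        }
        return 0;
    }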

In collaboration with colleagues at Lawrence Livermore National Laboratory, the University of Oregon, and Kitware (all members of ALPINE, the DOE Exascale Computing Project team developing in situ data analysis and visualization software), AMP’s WarpX simulations now support high-quality in situ visualization on thousands of GPUs, results that were presented at ISAV [2]. During the event, AMP also presented the challenges that its latest accelerator modeling algorithms pose for in situ processing on exascale supercomputers [2,4].
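
For readers curious what such a coupling looks like, the minimal sketch below hands an in-memory mesh to Ascent, the in situ visualization library developed under the ECP ALPINE effort, and requests a single rendered image; it follows Ascent’s published example interface and is not WarpX’s actual integration code.

    // Minimal sketch of in situ rendering with Ascent (illustrative only): a real
    // simulation would describe its own in-memory fields instead of the example mesh.
    #include <ascent.hpp>
    #include <conduit_blueprint.hpp>

    int main ()
    {
        // build an example mesh in Conduit's Mesh Blueprint format
        conduit::Node mesh;
        conduit::blueprint::mesh::examples::braid("uniform", 64, 64, 64, mesh);

        // declare one rendering action: a pseudocolor plot of the "braid" field
        conduit::Node actions;
        conduit::Node& add = actions.append();
        add["action"] = "add_scenes";
        add["scenes/s1/plots/p1/type"]  = "pseudocolor";
        add["scenes/s1/plots/p1/field"] = "braid";
        add["scenes/s1/image_prefix"]   = "insitu_render";

        // publish the in-memory mesh and render an image directly from it,
        // without writing the full data set to storage first
        ascent::Ascent a;
        a.open();
        a.publish(mesh);
        a.execute(actions);
        a.close();
        return 0;
    }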

Next-generation modeling

Advanced modeling and collaborations will continue through two DOE Office of Science Scientific Discovery through Advanced Computing (SciDAC) projects led by AMP: the High Energy Physics-supported CAMPA project and the Fusion Energy Sciences-supported KISMET project. AMP and its collaborators are already running simulations at exascale for new science cases, supported by an Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge award on Frontier and by NERSC Energy Research Computing Allocations Process awards on the Perlmutter supercomputer. AMP’s contribution [2] to ISAV23 will motivate future collaborations, showing the potential of novel in situ algorithms and visualization to diagnose particle beams in accelerators and to elucidate plasma processes in interactions with the world’s most intense laser pulses.

Inventing, developing, and deploying modeling tools that scale from laptops to exascale supercomputers, and that are user-friendly, well documented, benchmarked, and equipped with scalable data processing, analysis, and visualization, is core to AMP’s mission. The program provides innovative and well-tested open-source simulation codes for beam, plasma, and accelerator physicists in the DOE, the U.S. fusion industry, and many important international scientific collaborations.

Contact:  Axel Huebl and Jean-Luc Vay

Researchers: A. Huebl (AMP); J.-L. Vay (AMP); A. Formenti (AMP); M. Garten (AMP); and A. Myers (AMCRD)

Funding: US Exascale Computing Project (ECP); DOE SC SciDAC-HEP (Collaboration for Advanced Modeling of Particle Accelerators – CAMPA); and DOE SC SciDAC-FES (Kinetic IFE Simulations at Multiscale with Exascale Technologies – KISMET)

Publications:

  1. S. Atchley et al., incl. A. Huebl, A. Myers, and J.-L. Vay. “Frontier: Exploring Exascale,” at the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC23), Article no. 52, pp. 1-16, Denver, CO, USA, 2023, doi:10.1145/3581784.3607089 (SC23 Best Paper Finalist)
  2. A. Huebl, A. Formenti, M. Garten, and J.-L. Vay. “State of In Situ Visualization in Simulations: We are fast. But are we inspiring?” In Situ Infrastructures for Enabling Extreme-scale Analysis and Visualization (ISAV23), held in conjunction with the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC23), Denver, CO, USA, 2023,
    arXiv:2310.00469 (extended abstract) and doi:10.5281/zenodo.10162643 (slides)
  3. L. Fedeli, A. Huebl, F. Boillod-Cerneux, T. Clark, K. Gott, C. Hillairet, S. Jaure, A. Leblanc, R. Lehe, A. Myers, C. Piechurski, M. Sato, N. Zaim, W. Zhang, J.-L. Vay, and H. Vincenti. “Pushing the Frontier in the Design of Laser-Based Electron Accelerators with Groundbreaking Mesh-Refined Particle-In-Cell Simulations on Exascale-Class Supercomputers,” 2022 ACM Gordon Bell Prize Winner. The International Conference for High Performance Computing, Networking, Storage and Analysis (SC22), ISSN:2167-4337, pp. 25-36, Dallas, TX, USA, 2022, doi:10.1109/SC41404.2022.00008
  4. K. Moreland et al., incl. A. Huebl. “Visualization at the Exascale: Making it all Work with VTK-m,” submitted to the International Journal of High-Performance Computing Applications, 2024.

 

For more information on ATAP News articles, contact caw@lbl.gov.