Session 1 – Preparation and optimization of HPC codes to Exascale
GC11-solidearth-65 | Orals | Session 1
Enabling End-to-End Accelerated Multiphysics Simulations in the Exascale Era Using PETSc
Richard Tran Mills, Mark Adams, Satish Balay, Jed Brown, Jacob Faibussowitsch, Matthew G. Knepley, Scott Kruger, Hannah Morgan, Todd Munson, Karl Rupp, Barry Smith, Stefano Zampini, Hong Zhang, and Junchao Zhang
The Portable Extensible Toolkit for Scientific Computation (PETSc) library provides scalable solvers for nonlinear time-dependent differential and algebraic equations and for numerical optimization; it is used in dozens of scientific fields and has been an important building block for many computational geoscience applications. Starting from the terascale era in the 1990s and continuing into the present dawn of the exascale era, a major goal of PETSc development has been achieving the scalability required to fully utilize leadership-class supercomputers. We will describe some of the algorithmic developments made during the era in which achieving inter-node scalability was the primary challenge to enabling extreme-scale computation, and then survey the challenges posed by the current era, in which harnessing the abundant fine-scale parallelism within compute nodes -- primarily in the form of GPU-based accelerators -- has assumed at least equal importance. We will discuss how the PETSc design for performance portability addresses these challenges while stressing flexibility and extensibility by separating the programming model used by application code from that used by the library. Additionally, we will discuss recent developments in PETSc's communication module, PetscSF, that enable flexibility and scalable performance across large GPU-based systems while overcoming some of the difficulties posed by working directly with the Message Passing Interface (MPI) on such systems. A particular goal of this talk is to go beyond simply describing the work performed to prepare PETSc, and the simulation codes that rely on it, to run on exascale-class systems: we will also enumerate the challenges we encountered and share the essential lessons learned that can help other developers prepare and optimize their high-performance scientific computing codes for the exascale era.
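To illustrate the kind of solver interface PETSc exposes, the following is a minimal sketch of our own (using the petsc4py Python bindings; it is not code from this talk) that solves a scalar nonlinear equation with the SNES object. Runtime options passed through setFromOptions() are what allow the same script to select different solver configurations and backends.

```python
# Minimal petsc4py sketch (illustrative only, not code from this work):
# solve F(x) = x^2 - 2 = 0 with PETSc's nonlinear solver object SNES.
from petsc4py import PETSc

def residual(snes, x, f):
    f[0] = x[0] * x[0] - 2.0      # F(x) = x^2 - 2
    f.assemble()

def jacobian(snes, x, J, P):
    P[0, 0] = 2.0 * x[0]          # dF/dx = 2x
    P.assemble()

x = PETSc.Vec().createSeq(1); x.set(1.0)   # initial guess
f = x.duplicate()
J = PETSc.Mat().createAIJ([1, 1]); J.setUp()

snes = PETSc.SNES().create()
snes.setFunction(residual, f)
snes.setJacobian(jacobian, J)
snes.setFromOptions()             # command-line options pick solvers/backends
snes.solve(None, x)
print(x.getArray())               # approx. [1.41421356]
```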
How to cite: Tran Mills, R., Adams, M., Balay, S., Brown, J., Faibussowitsch, J., G. Knepley, M., Kruger, S., Morgan, H., Munson, T., Rupp, K., Smith, B., Zampini, S., Zhang, H., and Zhang, J.: Enabling End-to-End Accelerated Multiphysics Simulations in the Exascale Era Using PETSc, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-65, https://doi.org/10.5194/egusphere-gc11-solidearth-65, 2023.
GC11-solidearth-15 | Poster | Session 1
Using Julia for the next generation of HPC-ready software for geodynamic modelling
Albert de Montserrat Navarro, Boris Kaus, Ludovic Räss, Ivan Utkin, and Paul Tackley
Following the long-standing paradigm in HPC, scientific software has typically been written in high-level, statically typed, compiled languages, namely C/C++ and Fortran. The arguably low productivity rates of these languages led to the so-called two-language problem, where dynamic languages such as Python or MATLAB are used for prototyping before the algorithms are ported to high-performance languages. The Julia programming language aims to deliver the productivity and other advantages of such dynamic languages without sacrificing the performance provided by their static counterparts. The combination of high performance, high productivity, and other powerful tools, such as advanced meta-programming (i.e. code generation), makes Julia a suitable candidate for the next generation of HPC-ready scientific software.
We introduce the open-source Julia package JustRelax.jl (https://github.com/PTsolvers/JustRelax.jl) as a way forward for the next generation of geodynamic codes. JustRelax.jl is a production-ready API for a collection of highly efficient numerical solvers (Stokes, diffusion, etc.) based on the embarrassingly parallel pseudo-transient method. We rely on ParallelStencil.jl (https://github.com/omlins/ParallelStencil.jl), which leverages the advanced meta-programming capabilities of Julia to generate efficient computational kernels agnostic to the back-end system (i.e. Central Processing Unit (CPU) or Graphics Processing Unit (GPU)). Using ImplicitGlobalGrid.jl (https://github.com/eth-cscs/ImplicitGlobalGrid.jl) to handle the MPI and CUDA-aware MPI communication, these computational kernels run seamlessly on local shared-memory workstations and on distributed-memory, multi-GPU HPC systems with little effort from the front-end user.
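For readers unfamiliar with the pseudo-transient approach, a schematic sketch in our own notation (not taken from JustRelax.jl): the steady problem is augmented with a pseudo-time derivative and relaxed until the residual vanishes,

\[
\theta \,\frac{\partial u}{\partial \tau} = \nabla \cdot \big( D \,\nabla u \big) + f ,
\]

where \(\tau\) is pseudo-time and \(\theta\) a numerical relaxation parameter; once \(\partial u / \partial \tau \to 0\), the steady equation is recovered. Because each pseudo-time update involves only local stencil operations, the iterations map naturally onto GPUs and distributed meshes, which is the sense in which the method is embarrassingly parallel.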
Efficient computation of the (local) physical properties of different materials is another critical feature required in geodynamic codes, for which we employ GeoParams.jl (https://github.com/JuliaGeodynamics/GeoParams.jl). This package provides lightweight, optimised, and reproducible computation of different material properties (e.g. advanced rheological laws, density, seismic velocity, etc.), amongst other available features. GeoParams.jl is also carefully designed to support CPU and GPU devices and to be fully compatible with external packages such as ParallelStencil.jl and existing automatic-differentiation packages.
We finally show high-resolution GPU examples of geodynamic models based on the presented open-source Julia tools.
How to cite: de Montserrat Navarro, A., Kaus, B., Räss, L., Utkin, I., and Tackley, P.: Using Julia for the next generation of HPC-ready software for geodynamic modelling, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-15, https://doi.org/10.5194/egusphere-gc11-solidearth-15, 2023.
GC11-solidearth-21 | Orals | Session 1
Towards exascale-prepared codes for tsunami simulation
Marc de la Asunción, Jorge Macías, and Manuel Jesús Castro
The recently concluded ChEESE European project aimed to establish a center of excellence in the domain of solid Earth by developing ten flagship European codes prepared for the upcoming exascale supercomputers. The EDANYA group at the University of Malaga, Spain, has developed two of these flagship codes: Tsunami-HySEA and Landslide-HySEA, for the simulation of tsunamis generated by earthquakes and landslides, respectively. Although these two codes were already implemented for multi-GPU architectures at the beginning of the ChEESE project, they underwent substantial changes during the lifetime of the project with the objective of improving several crucial aspects, such as efficiency, scaling, and input/output requirements. Specifically, we added features such as improved load balancing, direct GPU-to-GPU data transfers, and compressed output files, among others. Additionally, we developed a version of Tsunami-HySEA, named Monte-Carlo, particularly suited for areas such as probabilistic tsunami forecasting and machine learning, which is capable of performing multiple simulations in parallel. In this presentation we describe these developments carried out during the ChEESE project, along with the two audits of the codes performed by the Performance Optimisation and Productivity Centre of Excellence in HPC (POP CoE). Finally, we show comparative results on realistic scenarios obtained at the beginning and at the end of the project.
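To illustrate what direct GPU-to-GPU data transfers mean in practice, here is a generic sketch under our own assumptions (using mpi4py and CuPy, not the CUDA code of the HySEA solvers): with a CUDA-aware MPI build, device buffers can be handed straight to MPI calls, so halo exchanges avoid staging through host memory.

```python
# Generic illustration (not HySEA code): a halo exchange where GPU arrays are
# passed directly to MPI. Requires a CUDA-aware MPI build; buffers stay on the
# device instead of being copied to the host first.
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

halo = cp.full(1024, float(rank), dtype=cp.float64)  # boundary cells on this GPU
recv = cp.empty_like(halo)

left, right = (rank - 1) % size, (rank + 1) % size
comm.Sendrecv(halo, dest=right, recvbuf=recv, source=left)
```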
How to cite: de la Asunción, M., Macías, J., and Castro, M. J.: Towards exascale-prepared codes for tsunami simulation, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-21, https://doi.org/10.5194/egusphere-gc11-solidearth-21, 2023.
GC11-solidearth-37 | Poster | Session 1
A distributed-heterogeneous framework for explicit hyperbolic solvers of shallow-water equations
Rui M L Ferreira, Daniel Conde, Zixiong Zhao, and Peng Hu
This work addresses current performance limitations by introducing new distributed multi-architecture design approaches for massively parallel hyperbolic solvers. Successful implementations that couple MPI with either OpenMP or CUDA have previously been reported in the literature. We present novel approaches that remain intuitive and compatible with a developer-centered object-oriented programming (OOP) paradigm, coupled with a cache-conscious data layout that is compatible with both structured and unstructured meshes, promoting memory efficiency and quasi-linear scalability.
One of the approaches is based on a unified object-oriented CPU+GPU framework, augmented with an inter-device communication layer, enabling both coarse- and fine-grain parallelism in hyperbolic solvers. The framework is implemented through a combination of three different programming models, namely OpenMP, CUDA and MPI. The second approach is also based on a unified object-oriented CPU+GPU framework, augmented with an improved local time-stepping (LTS) algorithm for variable updating. This framework is implemented through a combination of a parallel technology (CUDA Fortran) and a mathematical algorithm (LTS).
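As a reminder of what local time stepping buys (standard textbook form, not the authors' specific scheme): each cell \(i\) has its own admissible explicit step

\[
\Delta t_i = \mathrm{CFL}\,\frac{\Delta x_i}{\lvert u_i \rvert + \sqrt{g h_i}} ,
\]

and a global-step scheme must advance every cell with \(\min_i \Delta t_i\), whereas an LTS scheme lets each cell (or group of cells) advance with a step close to its own \(\Delta t_i\), reducing the total number of cell updates when cell sizes or wave speeds vary strongly across the mesh.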
The efficiency of these distributed-heterogeneous frameworks is quantified under static and dynamic loads on consumer- and professional-grade CPUs and GPUs. In both approaches, an asynchronous communication scheme is implemented and described, showing very low overheads and nearly linear scalability for multiple device combinations. For simulations (or systems) with non-homogeneous workloads (or devices), the domain decomposition algorithm incorporates a low-frequency load-to-device fitting function to ensure computational balance. Real-world applications to high-resolution shallow-water problems are presented. The proposed implementations show speedups of up to two orders of magnitude, opening new perspectives for shallow-water solvers with high-demand requirements.
How to cite: L Ferreira, R. M., Conde, D., Zhao, Z., and Hu, P.: A distributed-heterogeneous framework for of explicit hyperbolic solvers of shallow-water equations, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-37, https://doi.org/10.5194/egusphere-gc11-solidearth-37, 2023.
GC11-solidearth-16 | Orals | Session 1
Massively parallel inverse modelling on GPUs using the adjoint method
Ivan Utkin and Ludovic Räss
Continuum-based numerical modelling is a useful tool for interpreting field observations and geological or geotechnical data. In order to match the available data and the results of numerical simulations, it is necessary to estimate the sensitivity of a particular model to changes in its parameters. Recent advances in hardware design, such as the development of massively parallel graphics processing units (GPUs), make it possible to run simulations at unprecedented resolution, close to that of the original data. Thus, automated methods of calculating the sensitivity in a high-dimensional parameter space are in demand.
The adjoint method of computing sensitivities, i.e., gradients of the forward model solution with respect to the parameters of interest, is gaining attention in the scientific and engineering communities. This method allows the sensitivities to be computed for every point of the computational domain using the results of only one forward solve, in contrast to the direct method, which would require running the forward simulation for each point of the domain. This property of the adjoint method significantly reduces the amount of computational resources required for sensitivity analysis and inverse modelling.
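In schematic form (standard adjoint-state notation, not specific to this work): for a forward model \(R(u, p) = 0\) with state \(u\) and parameters \(p\), and an objective \(J(u, p)\), the sensitivity is

\[
\frac{\mathrm{d}J}{\mathrm{d}p} = \frac{\partial J}{\partial p} - \lambda^{\mathsf{T}} \frac{\partial R}{\partial p},
\qquad
\left(\frac{\partial R}{\partial u}\right)^{\!\mathsf{T}} \lambda = \left(\frac{\partial J}{\partial u}\right)^{\!\mathsf{T}},
\]

so a single adjoint solve for \(\lambda\) yields the gradient with respect to all parameters at once, which is what makes sensitivity analysis affordable in high-dimensional parameter spaces.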
In this work, we demonstrate applications of the adjoint method to inverse modelling in geosciences. We developed massively parallel 3D forward and inverse solvers with full GPU support using the Julia language. We present the results of performance and scalability tests on the Piz Daint supercomputer at CSCS.
How to cite: Utkin, I. and Räss, L.: Massively parallel inverse modelling on GPUs using the adjoint method, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-16, https://doi.org/10.5194/egusphere-gc11-solidearth-16, 2023.
GC11-solidearth-38 | Poster | Session 1
Towards exascale shallow-water modelling with SERGHEI model and Kokkos
Daniel Caviedes-Voullième, Mario Morales-Hernández, and Ilhan Özgen-Xian
The Simulation EnviRonment for Geomorphology, Hydrodynamics and Ecohydrology in Integrated form (SERGHEI) is a model framework for environmental hydrodynamics, ecohydrology, morphodynamics and, importantly, the interactions and feedbacks among such processes. SERGHEI is designed to be applicable both to geoscientific questions about coupled processes in Earth system science, such as hydrological connectivity and river stability, and to engineering applications such as flooding and transport phenomena.
In this contribution we present the SERGHEI model framework, its modular concept and its performance-portable implementation. We discuss the implementation of SERGHEI, including the specifics of a highly efficient parallel implementation of the numerical scheme (based on augmented Riemann solvers) and how we achieve portability using the Kokkos programming model as an abstraction layer. Our experience with SERGHEI suggests that Kokkos is a robust path towards performance portability and sets a realistic course for SERGHEI to be ready for the upcoming European exascale systems.
We focus on the SERGHEI-SWE module, which solves the 2D shallow-water equations. We show that this fully operational module is performance-portable across CPUs and GPUs in several TOP500 systems, and we present first results on portability across GPU vendors. We discuss the computational performance on benchmark problems and show its scalability into the range of hundreds of scientific-grade GPUs. Additionally, we show first results on the performance of the upcoming transport module in SERGHEI, and discuss the computational implications and outlook considering further integration of new modules and solvers in SERGHEI.
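For reference, the 2D shallow-water system solved by such a module can be written in conservative form (standard notation, not the specific SERGHEI formulation):

\[
\partial_t \mathbf{U} + \partial_x \mathbf{F}(\mathbf{U}) + \partial_y \mathbf{G}(\mathbf{U}) = \mathbf{S}(\mathbf{U}),\quad
\mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix},\;
\mathbf{F} = \begin{pmatrix} hu \\ hu^2 + \tfrac{1}{2} g h^2 \\ huv \end{pmatrix},\;
\mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^2 + \tfrac{1}{2} g h^2 \end{pmatrix},
\]

with \(h\) the water depth, \((u, v)\) the depth-averaged velocities, \(g\) gravity, and \(\mathbf{S}\) the bed-slope and friction source terms; the augmented Riemann solvers mentioned above act on the interface fluxes \(\mathbf{F}\) and \(\mathbf{G}\).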
How to cite: Caviedes-Voullième, D., Morales-Hernández, M., and Özgen-Xian, I.: Towards exascale shallow-water modelling with SERGHEI model and Kokkos, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-38, https://doi.org/10.5194/egusphere-gc11-solidearth-38, 2023.
GC11-solidearth-39 | Poster | Session 1
Modelling the accumulation of magma prior to the caldera collapse
Pascal Aellig, Boris Kaus, Albert de Montserrat, Daniel Kiss, and Nicolas Berlie
Large, voluminous volcanic eruptions, so-called caldera-forming eruptions, pose a threat to humankind and its growing cities located near volcanoes. With today's technology, the underlying processes of large-scale magmatic systems can be modelled to further improve the understanding of all phases of an eruption. For those caldera-forming events, the deficit in overpressure and magma stored within the chamber results in a collapse of the ceiling and permanently alters the geomorphology of the region. The processes of magma accumulation, the resulting overpressure and the fracturing of the country rock can be modelled using various dynamic magma models. In this study we take a multiphysical approach and couple an open-source thermal-evolution magma intrusion model with a pseudo-transient Stokes solver (PT Solver) written in Julia. The model is set up to run in parallel on graphics processing units (GPUs) to maximise its efficiency and applicability to the newest generation of high-performance computing (HPC) machines. The coupling enables us to model the growth of the magmatic system while also accounting for different complexities in rheology. The model provides an indication of the long-term magmatic evolution, both thermal and volumetric, during the build-up stage prior to caldera-forming eruptions.
How to cite: Aellig, P., Kaus, B., de Montserrat, A., Kiss, D., and Berlie, N.: Modelling the accumulation of magma prior to the caldera collapse, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-39, https://doi.org/10.5194/egusphere-gc11-solidearth-39, 2023.
GC11-solidearth-67 | Poster | Session 1
AWP-ODC: A Highly Scalable HPC Tool for Dynamic Rupture and Wave Propagation Simulations
Kim Olsen, Daniel Roten, Te-Yang Yeh, Ke Xu, and Yifeng Cui
AWP-ODC is an open-source dynamic rupture and wave propagation code which solves the 3D velocity-stress wave equation explicitly by a staggered-grid finite-difference method with fourth-order accuracy in space and second-order accuracy in time. The code is memory-bandwidth bound, with excellent scalability up to full machine scale on CPUs and GPUs; it has been tuned on CLX, with support for generating vector-folded finite-difference stencils using intrinsic functions. AWP-ODC includes frequency-dependent anelastic attenuation Q(f), small-scale media heterogeneities, support for topography, Drucker-Prager visco-plasticity, and a multi-yield-surface, hysteretic (Iwan) nonlinear model using an overlay concept. Support for a discontinuous mesh is available for increased efficiency. An important application of AWP-ODC is the CyberShake Strain-Green-Tensor (SGT) code used for probabilistic hazard analysis in California and other regions.
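The velocity-stress formulation referred to above can be written in standard form (isotropic elastic case, without the attenuation and plasticity terms the code adds):

\[
\rho\,\partial_t v_i = \partial_j \sigma_{ij} + f_i, \qquad
\partial_t \sigma_{ij} = \lambda\,\delta_{ij}\,\partial_k v_k + \mu\left(\partial_i v_j + \partial_j v_i\right),
\]

a first-order hyperbolic system for the velocities \(v_i\) and stresses \(\sigma_{ij}\) that maps naturally onto staggered-grid finite differences.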
Here, we summarize the implementation and verification of some of the widely used capabilities of AWP-ODC, as well as validation against strong-motion data and recent applications to future earthquake scenarios. We show, for a M7.8 dynamic rupture ShakeOut scenario on the southern San Andreas fault, that while simulations with a single yield surface reduce long-period ground motion amplitudes by about 25% inside a wave guide in greater Los Angeles, multi-surface Iwan nonlinearity further reduces the values by a factor of two. In addition, we show the assembly and calibration of a 3D Community Velocity Model (CVM) for central and southern Chile as well as Peru. The CVM is validated for the 2010 M8.8 Maule, Chile, earthquake up to 5 Hz, and the validated CVM is used for simulations of megathrust scenario events with magnitudes up to M9.5 in the Chile-Peru subduction zone for risk assessment. Finally, we show simulations of 0-3 Hz 3D wave propagation for the 2019 Mw 7.1 Ridgecrest earthquake including a data-constrained high-resolution fault-zone model. Our results show that the heterogeneous near-fault low-velocity zone inherent to the fault-zone structure significantly perturbs the predicted wave field in the near-source region, in particular by more accurately generating Love waves at its boundaries, in better agreement with observations, including at distances of 200+ km in Los Angeles.
How to cite: Olsen, K., Roten, D., Yeh, T.-Y., Xu, K., and Cui, Y.: AWP-ODC: A Highly Scalable HPC Tool for Dynamic Rupture and Wave Propagation Simulations, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-67, https://doi.org/10.5194/egusphere-gc11-solidearth-67, 2023.
GC11-solidearth-69 | Poster | Session 1
Tandem: A discontinuous Galerkin method for sequences of earthquakes and aseismic slip on multiple faults using unstructured curvilinear grids
Casper Pranger, Dave A. May, and Alice-Agnes Gabriel
Seismic cycle modeling has advanced to the point where 3D models could be connected to geodetic and seismic observations to test hypotheses about controls on the depth and along-strike extent of ruptures, and about interactions between seismic and aseismic slip events in geometrically complex fault systems. Such an undertaking does, however, require a greater degree of geometrical flexibility, a wider range of material behaviors, higher code performance, and more community participation than has so far been the standard.
In this light we present tandem, an open-source C++ code for 3D earthquake sequence modeling (Uphoff et al., 2022a). Tandem solves the elasticity problem with a discontinuous Galerkin finite element method on unstructured tetrahedral meshes. It can handle multiple, nonplanar faults and spatially variable elastic properties (Figure 1). The method can be extended to nonlinear off-fault material response (e.g., power-law viscoelasticity). The code is parallelized with MPI and uses the PETSc-TAO library (Balay et al., 2022) for time integrators and preconditioned Krylov methods to solve the static elasticity problem. Faults are governed by rate-and-state friction, and adaptive time stepping permits modeling of dynamic rupture, the postseismic period, and interseismic loading, all across multiple earthquake cycles. The code is developed with best practices for open-source community software and includes documentation and tutorials to facilitate use by the community (github.com/TEAR-ERC/tandem).
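The rate-and-state friction mentioned above is commonly written in the classical Dieterich-Ruina aging-law form (the specific regularization used in tandem may differ):

\[
\mu(V,\theta) = \mu_0 + a \ln\!\frac{V}{V_0} + b \ln\!\frac{V_0\,\theta}{D_c}, \qquad
\frac{\mathrm{d}\theta}{\mathrm{d}t} = 1 - \frac{V\theta}{D_c},
\]

where \(V\) is slip rate, \(\theta\) a state variable, and \(a\), \(b\), \(D_c\), \(\mu_0\), \(V_0\) frictional parameters; the stiff coupling between this law and quasi-static elasticity is what makes adaptive time stepping across seismic and aseismic phases necessary.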
Uphoff, C., D. A. May, and A.-A. Gabriel (2022a), A discontinuous Galerkin method for sequences of earthquakes and aseismic slip on multiple faults using unstructured curvilinear grids, Geophysical Journal International.
Balay, S., et al. (2022), PETSc/TAO users manual, Tech. rep., Argonne National Lab., Argonne, IL, USA (petsc.org/).
How to cite: Pranger, C., May, D. A., and Gabriel, A.-A.: Tandem: A discontinuous Galerkin method for sequences of earthquakes and aseismic slip on multiple faults using unstructured curvilinear grids, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-69, https://doi.org/10.5194/egusphere-gc11-solidearth-69, 2023.
Session 2 – Edge-to-end data workflows
GC11-solidearth-53 | Orals | Session 2
Preparing Seismic Applications for Exascale Using Scientific Workflows
Scott Callaghan, Philip Maechling, Karan Vahi, Ewa Deelman, Fabio Silva, Kevin Milner, Kim Olsen, Robert Graves, Thomas Jordan, and Yehuda Ben-Zion
Scientific workflows are key to supporting the execution of large-scale simulations in many scientific domains, including solid earth geophysics. Although many different workflow tools exist, they share common features, enabling application developers to express their simulations as a series of linked software elements with data dependencies and then execute the workflow efficiently on distributed resources.
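As a toy illustration of the idea (not any particular workflow engine, and with invented task names): a workflow is a directed acyclic graph of tasks linked by data dependencies, and the engine's job is to execute tasks in dependency order, ideally dispatching independent tasks concurrently on distributed resources.

```python
# Toy sketch (no specific workflow system implied): tasks linked by data
# dependencies, executed in topological order. Task names are invented.
from graphlib import TopologicalSorter

def make_mesh():          return "mesh"
def compute_sgt(mesh):    return f"sgt[{mesh}]"
def synth_seis(sgt):      return f"seismograms[{sgt}]"

deps  = {"sgt": {"mesh"}, "seis": {"sgt"}}          # task -> prerequisites
funcs = {"mesh": make_mesh, "sgt": compute_sgt, "seis": synth_seis}

results = {}
for name in TopologicalSorter(deps).static_order():  # dependency order
    inputs = [results[d] for d in sorted(deps.get(name, ()))]
    results[name] = funcs[name](*inputs)
print(results["seis"])
```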
To illustrate the use and benefits of scientific workflows in seismic applications, this talk will describe CyberShake, a probabilistic seismic hazard analysis (PSHA) platform developed by the Southern California Earthquake Center (SCEC). CyberShake uses 3D physics-based wave propagation simulations with reciprocity to calculate ground motions for events from an earthquake rupture forecast (ERF). Typically, CyberShake considers over 500,000 events per site of interest, and then combines the individual ground motions with probabilities from the ERF to produce site-specific PSHA curves. CyberShake has integrated modules from another SCEC workflow application, the Broadband Platform (BBP), enabling CyberShake simulations to include both low-frequency deterministic and high-frequency stochastic content. This talk will discuss the workflow framework that CyberShake utilizes to support campaigns requiring hundreds of thousands of node-hours over months of wall clock time, and the lessons learned through 15 years of CyberShake simulations.
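The hazard-curve step can be summarized as follows (standard PSHA formulation, simplified relative to the full CyberShake procedure): for a site and an intensity measure IM, the annual exceedance rate is

\[
\lambda(IM > x) \;=\; \sum_{k} r_k \; P\big(IM > x \,\big|\, \mathrm{rupture}_k\big),
\]

where \(r_k\) is the annual rate of rupture \(k\) from the ERF and the conditional exceedance probabilities come from the simulated ground motions; sweeping over \(x\) yields the site-specific hazard curve.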
This talk will also reflect on the growth and development of workflow-based simulations and explore the challenges faced by applications in the exascale era, such as managing massive volumes of data, taking full advantage of exascale systems, and responding to the emergence of AI-informed simulations. The talk will discuss ways in which workflow technologies may help mitigate these challenges as we move our science forward.
How to cite: Callaghan, S., Maechling, P., Vahi, K., Deelman, E., Silva, F., Milner, K., Olsen, K., Graves, R., Jordan, T., and Ben-Zion, Y.: Preparing Seismic Applications for Exascale Using Scientific Workflows, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-53, https://doi.org/10.5194/egusphere-gc11-solidearth-53, 2023.
GC11-solidearth-4 | Poster | Session 2
Alerting technologies to save lives in Forest Fires are effective with technology redundancies and multilingual CAP Event Terms basis. Design for worldwide consumer electronics adoption is preferable to reduce costs.
Frank Bell
California wildland and forest fires in 2017 resulted in over 100 fatalities. While WEA alerts were transmitted to mobiles in selected areas, power and network outages limited their delivery. WEA is similar to the SMS broadcast systems used elsewhere: it does not require a subscription and can be geotargeted, usually by map polygon. Other alerting technologies are already available or in development. The Emergency Alert System on radio and TV in the U.S. has been in use for many years. It is a broadcast break-in system that overrides program content. It was used in one location for the wildfires, but not elsewhere, as geotargeting is not possible with this system; it is an analog broadcast technology architecture. AM and FM broadcasting in the U.S. now has HD Radio, which mixes analog and digital transmission. A limited data message can be carried and used for selective delivery of alerts. DAB, DAB+ and DRM can also carry a message payload, which can be used as a selective delivery mechanism when the receiver knows its position, for example in a vehicle radio/navigation system. The current digital television system in the U.S. and some other countries is now being replaced by ATSC 3.0, which provides a superior modulation format, Layered Division Multiplexing (LDM), for delivery of program content and alerts to suitable mobiles. An IC for UHF reception and prototype mobiles have been developed; no external antenna is required. Both of these new technologies have been tested to deliver alerts independently of the mobile network. Within the limitations of radio and TV propagation, such capabilities would provide technology redundancy. Television signal propagation may be limited in rural areas, but ATSC 3.0 can use on-frequency repeaters to form a single-frequency network for improved coverage of program content and alerting. Multilingual alerts based on the CAP Event Terms list with Message Formats are provided for.
How to cite: Bell, F.: Alerting technologies to save lives in Forest Fires are effective with technology redundancies and multilingual CAP Event Terms basis. Design for worldwide consumer electronics adoption is preferable to reduce costs., Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-4, https://doi.org/10.5194/egusphere-gc11-solidearth-4, 2023.
GC11-solidearth-8 | Poster | Session 2
The Collaborative Seismic Earth Model: Generation 2
Sebastian Noe, Dirk-Philip van Herwaarden, Sölvi Thrastarson, and Andreas Fichtner
We present the second generation of the Collaborative Seismic Earth Model (CSEM), a multi-scale global tomographic Earth model that continuously evolves via successive regional and global-scale refinements. Given finite computational resources, a systematic community effort enables the construction of the Earth model within the CSEM architecture, thereby taking advantage of the distributed human and computing power within the seismological community. The basic update methodology uses the current version of the CSEM as the initial model for regional tomographies. This setup allows previously accumulated knowledge to be incorporated consistently into each new iteration of the CSEM. The latest generation of the CSEM includes 21 regional refinements from full seismic waveform inversion, ranging from several tens of kilometers to the entire globe. Notable changes since the first generation include detailed local waveform inversions for the Central Andes, Iran, South-east Asia and the Western United States; continental-scale refinements for Africa and Asia; and a global long-period tomography in areas that are not included in any of the submodels. Across all regional refinements in the current CSEM, three-component waveform data from 1,637 events and over 700,000 unique source-receiver pairs are utilized to resolve subsurface structure. Minimum periods of the models range between 8 and 55 seconds. Using this model as a starting point, a global full-waveform inversion over multiple period bands down to periods of 50 seconds is deployed to ensure that the regional updates predict waveforms and that whole-Earth structure is honored. In this contribution, we will present the CSEM updating scheme and its parameterization, as well as the current state of the model. We show that the model predicts seismic waveforms on global and regional scales. Active participation in the project is encouraged.
How to cite: Noe, S., van Herwaarden, D.-P., Thrastarson, S., and Fichtner, A.: The Collaborative Seismic Earth Model: Generation 2, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-8, https://doi.org/10.5194/egusphere-gc11-solidearth-8, 2023.
GC11-solidearth-7 | Orals | Session 2
BackTrackBB workflow for seismic source detection and location with PyCOMPSs parallel computational framework
Natalia Poiata, Javier Conejero, Rosa M. Badia, and Jean-Pierre Vilotte
In this work we present a scalable parallelization with PyCOMPSs (Tejedor et al., 2017; the Python binding of COMPSs) of the Python-based workflow BackTrackBB (Poiata et al., 2016) for the automatic detection and location of seismic sources using continuous waveform data recorded by regular to large seismic networks. PyCOMPSs is a task-based programming model for Python applications that relies on a powerful runtime able to dynamically extract the parallelism among tasks and execute them in distributed environments (e.g., HPC clusters, cloud infrastructures, etc.) transparently to the users. The BackTrackBB implementation with PyCOMPSs fully parallelizes the seismic source detection and location process, making it efficient and portable in terms of the use of available HPC resources.
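To make the task-based model concrete, here is a generic PyCOMPSs sketch with invented function names (it is not the actual BackTrackBB code): decorated Python functions become asynchronous tasks whose dependencies the runtime infers from the data flow between them.

```python
# Generic PyCOMPSs sketch (function names are illustrative, not BackTrackBB's):
# each decorated call becomes an asynchronous task; the runtime builds the
# dependency graph and schedules independent tasks in parallel.
from pycompss.api.task import task
from pycompss.api.parameter import COLLECTION_IN
from pycompss.api.api import compss_wait_on

@task(returns=1)
def characteristic_function(waveform_chunk):
    return f"cf({waveform_chunk})"            # placeholder signal processing

@task(cfs=COLLECTION_IN, returns=1)
def backproject(cfs):
    return f"locations from {len(cfs)} streams"  # placeholder stacking/location

chunks = [f"station_{i}" for i in range(100)]
cfs = [characteristic_function(c) for c in chunks]  # 100 independent tasks
result = backproject(cfs)                           # depends on all of them
result = compss_wait_on(result)                     # synchronize and fetch
print(result)
```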
We provide details of the BackTrackBB workflow implementation with PyCOMPSs and discuss its performance by presenting the results of the scalability tests and memory usage analysis. All the tests have been performed on the MareNostrum4 High-Performance computer of the Barcelona Supercomputing Centre. The first version of the BackTrackBB with PyCOMPSs workflow was developed in the context of the European Centre Of Excellence (CoE) ChEESE for Exascale computing in solid earth sciences. The initial workflow developments and performance tests made use of a simplified synthetic dataset emulating a large-scale seismic network deployment in a seismically active area and corresponding to 100 vertical sensors recording a month of continuous waveforms at a sampling rate of 100 sps. In the following testing step, the workflow was applied to the real-case two-month long dataset from Vrancea seismic region in Romania (corresponding to the 60-190 km deep earthquakes activity). Real seismic data scenario proved to present some challenges in terms of the data-quality control, that often occurs in the case of continuous waveforms recorded by the seismic observatories. This issue have been resolved and corresponding modifications were included in the final version of BackTrackBB with PyCOMPSs. The real dataset tests showed that the workflow allows improved detection and location of seismic events through the efficient processing of the large continuous seismic data with important performance and scalability improvements.
We show that BackTrackBB with PyCOMPSs workflow enables generation of fully reproducible, seismic catalogues (or seismic catalogues realizations) through the analysis of the continuous large (in terms of the number of seismic stations, data record length and covered area) seismic data-sets. Such implementations making use of advances full-waveform detection and location methods are currently highly-challenging or, some-times, impossible due to the amount of required main memory or unfeasible time to solution. PyCOMPSs has demonstrated to be able to deal with both issues successfully allowing to explore in greater depth the usage with BackTrackBB method. Workflows such as BackTrackBB with PyCOMPSs has the ability to significantly improve the detections and location process that is currently in place at seismological observatories or network operation centres, providing fully reproducing detailed catalogues in the seismically-active regions and allowing multiple input parameters testing (e.g., station configuration, velocity models).
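As a rough illustration of how such a task-based decomposition can look, the sketch below uses the PyCOMPSs @task decorator to parallelize the processing of continuous waveform chunks and to gather the per-chunk detections; the function names, task granularity and data structures are illustrative assumptions and do not reproduce the actual BackTrackBB code.

```python
# Hypothetical PyCOMPSs task decomposition of a detection/location workflow.
# process_chunk and run_workflow are illustrative names only.
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on


@task(returns=1)
def process_chunk(waveform_chunk, stations, grid):
    """Compute characteristic functions and back-project one time window."""
    # ... signal processing and migration for this chunk would go here ...
    return {"chunk": waveform_chunk, "detections": []}


def run_workflow(waveform_chunks, stations, grid):
    # Tasks are submitted asynchronously; the COMPSs runtime schedules them on the
    # available nodes and resolves the data dependencies between them.
    partial = [process_chunk(chunk, stations, grid) for chunk in waveform_chunks]
    results = compss_wait_on(partial)          # synchronize and fetch the task results
    catalogue = []
    for result in results:
        catalogue.extend(result["detections"])
    return catalogue
```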
How to cite: Poiata, N., Conejero, J., Badia, R. M., and Vilotte, J.-P.: BackTrackBB workflow for seismic source detection and location with PyCOMPSs parallel computational framework, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-7, https://doi.org/10.5194/egusphere-gc11-solidearth-7, 2023.
GC11-solidearth-10 | Orals | Session 2
ML Emulation of High Resolution Inundation Maps for Probabilistic Tsunami Hazard Analysis
Steven Gibbons, Erlend Storrøsten, and Finn Løvholt
Local Probabilistic Tsunami Hazard Analysis (PTHA) aims to quantify the likelihood of a given metric of tsunami inundation at a given location over a given time interval. Seismic PTHA can require the simulation of thousands to tens of thousands of earthquake scenarios and can become computationally intractable when inundation over high-resolution grids is required. The numerical tsunami simulations write out time series at offshore locations to simulate the wave height that would be recorded on tide gauges at selected locations. The offshore time series can be calculated at a fraction of the cost of the full inundation calculations. For a stretch of the coast of Eastern Sicily, we explore the extent to which a machine learning procedure trained on a small fraction of the total number of scenarios can predict the inundation map associated with a given offshore time series. We exploit a set of over 30,000 numerical tsunami simulations to train and evaluate the ML procedure. The ML-based inundation predictions for locations close to the water's edge, which are flooded in many of the scenarios, show excellent correspondence with the numerical simulation results. Predicting inundation at locations further inland, which are flooded in only a small number of the simulations, is more challenging. Mitigating this shortcoming is a priority in the ongoing study.
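A minimal sketch of the emulation idea, assuming the simulations have been flattened into arrays, is shown below: offshore gauge time series are mapped to inundation values on a grid of onshore points with a multilayer perceptron. The array shapes and the choice of regressor are illustrative assumptions, not the architecture used in the study.

```python
# Emulating inundation maps from offshore time series (synthetic stand-in data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_scenarios, n_gauge_samples, n_grid_points = 500, 512, 2000

# X: offshore wave-height time series per scenario; y: maximum inundation depth per grid cell.
X = rng.normal(size=(n_scenarios, n_gauge_samples))
y = np.abs(rng.normal(size=(n_scenarios, n_grid_points)))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

emulator = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=500, random_state=0)
emulator.fit(X_train, y_train)

# Predicted inundation map for an unseen scenario, at a tiny fraction of the simulation cost.
predicted_map = emulator.predict(X_test[:1])
print(predicted_map.shape)  # (1, n_grid_points)
```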
How to cite: Gibbons, S., Storrøsten, E., and Løvholt, F.: ML Emulation of High Resolution Inundation Maps for Probabilistic Tsunami Hazard Analysis, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-10, https://doi.org/10.5194/egusphere-gc11-solidearth-10, 2023.
GC11-solidearth-31 | Poster | Session 2
Volcanic ash dispersal and deposition workflow on HPC
Alejandra Guerrero, Arnau Folch, and Leonardo Mingari
DT-GEO is a project proposed to deal with natural or anthropogenically induced geohazards (earthquakes, volcanoes, landslides and tsunamis) by deploying a Digital Twin of the planet. The prototype will provide a way to visualize, manipulate and understand the response to hypothetical or ongoing events by integrating data acquisition and models.
Due to the complexity of the development, the project is divided into different work packages and components. The volcanic phenomena package includes four Digital Twin Components (DTCs): volcanic unrest, volcanic ash clouds and ground accumulations, lava flows, and volcanic gas dispersal. The volcanic ash dispersal and deposition component implements a workflow for atmospheric dispersal and ground deposition forecast systems. The workflow is composed of four general units. The first, Numerical Weather Prediction (NWP) acquisition (provided by external institutions), covers both the automatic retrieval of forecasts (up to a few days ahead) and of reanalyses (pre-processed data from the past), at global or regional scales and different resolutions. The second, Triggering and Eruption Source Parameters (ESP), is based on predefined communication channels prioritized by an accuracy rank. The third unit sets up the FALL3D model and runs ensemble simulations obtained by perturbing the ESP values within a range. Finally, the post-processing unit compiles the simulations into hazard maps.
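The ensemble unit can be pictured as sampling ESP perturbations and preparing one FALL3D member per sample, as in the sketch below; the parameter choices, ranges, directory layout and submission mechanism are placeholders rather than the actual DT-GEO configuration.

```python
# Illustrative ensemble generation: perturb two example ESPs and prepare one member each.
import numpy as np

rng = np.random.default_rng(42)
n_members = 16

# Uniform perturbations of two example ESPs: column height (km) and eruption duration (h).
column_height = rng.uniform(5.0, 15.0, n_members)
duration = rng.uniform(1.0, 6.0, n_members)

for i, (h, d) in enumerate(zip(column_height, duration)):
    member = f"member_{i:03d}"
    # A real workflow would fill a FALL3D input template with the sampled ESPs and
    # submit the member to the HPC scheduler (e.g. via sbatch); here we only report it.
    print(f"{member}: column height = {h:.1f} km, duration = {d:.1f} h")

# Post-processing would then aggregate the member outputs into exceedance-probability
# hazard maps for airborne ash concentration and ground load.
```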
How to cite: Guerrero, A., Folch, A., and Mingari, L.: Volcanic ash dispersal and deposition workflow on HPC, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-31, https://doi.org/10.5194/egusphere-gc11-solidearth-31, 2023.
GC11-solidearth-12 | Orals | Session 2
Improving Probabilistic Gas Hazard Assessment through HPC: Unveiling VIGIL-2.0, an automatic Python workflow for probabilistic gas dispersion modelling
Silvia Massaro, Fabio Dioguardi, Alejandra Guerrero, Antonio Costa, Arnau Folch, Roberto Sulpizio, Giovanni Macedonio, and Leonardo Mingari
The atmospheric dispersion of gases (of natural or industrial origin) can be very hazardous to life and the environment if the concentration of some gas species exceeds species-specific thresholds. In this context, the variability associated with the natural phenomena has to be explored to provide robust probabilistic gas dispersion hazard assessments.
VIGIL-1.3 (automatic probabilistic VolcanIc Gas dIspersion modeLling) is a Python simulation tool developed to automate the complex and time-consuming simulation workflow required to process a large number of gas dispersion numerical simulations. It is interfaced with two models: a dilute (DISGAS) and a dense (TWODEE-2) gas dispersion model. The former is used when the density of the gas plume at the source is lower than the atmospheric density (e.g. fumaroles); the latter when the gas density is higher than that of the atmosphere, so that the gas accumulates on the ground and may flow due to the density contrast with the atmosphere to form a gravity current (e.g. cold CO2 flows).
As part of enhancing the code towards higher-scale computing, we present here the ongoing improvements aimed at extending code functionalities such as memory management, a modularity revision, and full-ensemble uncertainty on gas dispersal scenarios (e.g. sampling techniques for gas fluxes and source locations).
Optimizations are also provided in terms of error tracking, a redesign of the input file, validation of user-provided data, and the addition of Latin hypercube sampling (LHS) for the post-processing of model outputs.
All these new features will be issued in the forthcoming release of the code (VIGIL-2.0) in order to assist users, who may run VIGIL on laptops or large supercomputers, and to widen the spectrum of model applications from routine operational forecasting of volcanic gas to long-term hazard and/or risk assessment purposes.
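As an illustration of the Latin hypercube sampling mentioned above, space-filling samples over an uncertain parameter space can be drawn with SciPy as sketched below; the parameters (gas flux and source coordinates) and their bounds are purely illustrative and are not taken from VIGIL itself.

```python
# Latin hypercube samples over an example three-dimensional parameter space.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=1)
unit_samples = sampler.random(n=64)            # stratified samples in the unit cube

# Example bounds: gas flux (kg/s), source easting (m), source northing (m).
lower = [0.5, 425000.0, 4515000.0]
upper = [5.0, 426000.0, 4516000.0]
scenarios = qmc.scale(unit_samples, lower, upper)

for flux, x, y in scenarios[:3]:
    print(f"flux = {flux:.2f} kg/s, source = ({x:.0f}, {y:.0f})")
```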
How to cite: Massaro, S., Dioguardi, F., Guerrero, A., Costa, A., Folch, A., Sulpizio, R., Macedonio, G., and Mingari, L.: Improving Probabilistic Gas Hazard Assessment through HPC: Unveiling VIGIL-2.0, an automatic Python workflow for probabilistic gas dispersion modelling, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-12, https://doi.org/10.5194/egusphere-gc11-solidearth-12, 2023.
GC11-solidearth-50 | Poster | Session 2
Machine Learning based Estimator for ground Shaking maps
Marisol Monterrubio-Velasco, David Modesto, Scott Callaghan, and Josep de la Puente
Large earthquakes are among the most destructive natural phenomena. After a large-magnitude event occurs, a crucial task for hazard assessment is to rapidly and accurately estimate the ground shaking intensities in the affected region. To satisfy real-time constraints, ground shaking is traditionally evaluated with empirical relations called Ground Motion Prediction Equations (GMPEs), which can be combined with local amplification factors and early data recordings, when available. Given their nature, GMPEs can be inaccurate for modelling rarely observed earthquakes, such as large earthquakes. Furthermore, even for very populated databases, GMPEs are characterized by large variances, as earthquakes of similar magnitude and location may have very different outcomes related to complex fault phenomena and wave physics.
The ML Estimator for Ground Shaking maps (MLESmap) workflow is proposed as a novel procedure that exploits the predictive power of ML algorithms to estimate ground acceleration values a few seconds after a large earthquake occurs. The inferred model can produce peak (spectral) ground motion maps for quasi-real-time applications. Due to its fast assessment, it can further be used to explore uncertainties quickly and reliably. MLESmap is based upon large databases of physics-based seismic scenarios to feed the algorithms.
Our approach (i.e. simulate, train, deploy) can help produce the next generation of ground shaking maps, capturing physical information from wave propagation (directivity, topography, site effects) at the speed of simple empirical GMPEs. In this work, we will present the MLESmap workflow, its precision, and a use case.
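The simulate-train-deploy idea can be sketched as fitting a fast regressor on features extracted from physics-based scenarios; the features, the synthetic stand-in data and the gradient-boosting model below are assumptions chosen for illustration and do not represent the actual MLESmap design.

```python
# Training a fast ground-shaking predictor on (synthetic) physics-based scenario features.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 20000

magnitude = rng.uniform(5.0, 7.5, n)
distance = rng.uniform(1.0, 150.0, n)          # source-to-site distance (km)
vs30 = rng.uniform(200.0, 900.0, n)            # site proxy (m/s)
# Synthetic stand-in for simulated log peak ground acceleration.
log_pga = 1.2 * magnitude - 1.8 * np.log(distance) - 0.3 * np.log(vs30) + rng.normal(0, 0.4, n)

X = np.column_stack([magnitude, distance, vs30])
X_train, X_test, y_train, y_test = train_test_split(X, log_pga, test_size=0.2, random_state=0)

model = HistGradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("R^2 on held-out scenarios:", round(model.score(X_test, y_test), 3))
```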
How to cite: Monterrubio-Velasco, M., Modesto, D., Callaghan, S., and de la Puente, J.: Machine Learning based Estimator for ground Shaking maps, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-50, https://doi.org/10.5194/egusphere-gc11-solidearth-50, 2023.
GC11-solidearth-41 | Poster | Session 2
A new Near-Fault Earthquake Ground Motion Model for Iceland from Bayesian Hierarchical Modeling
Farnaz Bayat, Milad Kowsari, Otilio Rojas, Marisol Monterrubio-Velasco, Josep de la Puente, and Benedikt Halldorsson
The strongest earthquakes in Southwest Iceland take place on a large number of north-south, near-vertical, dextral strike-slip faults located side by side along the entire zone. The capital region, along with multiple small towns, lies in close proximity to or on top of this fault system, together with all the infrastructure and lifelines of modern society. As a result, seismic hazard is highest in this region, and performing a probabilistic seismic hazard assessment (PSHA), the most widely used procedure to reduce the ruinous effects of large earthquakes, is vital. A reliable PSHA requires reliable ground motion models (GMMs) that can appropriately describe the ground shaking at any given location. However, past PSHA efforts in Iceland did not account for the complex near-fault effects in the form of the long-period, high-amplitude velocity pulses that are the most damaging feature of ground motions in the near-fault region. Recently, a new 3D finite-fault system model of the entire bookshelf zone has been proposed for Southwest Iceland. The model has been balanced against the rate of tectonic plate motion, and its seismic activity has been shown to vary along the entire zone. Given the unknown fault locations, the model allows both deterministic and random fault locations, and each fault is completely specified in terms of its maximum magnitude, its dimensions, and its long-term slip and moment rates. In collaboration with the ChEESE project, a realization of a 3000-year finite-fault earthquake catalogue based on the 3D finite-fault system model has been implemented in the CyberShake platform, and the ground motion of each earthquake has been simulated for a dense grid of 594 stations. The simulations were carried out on high-performance computing systems of the Barcelona Supercomputing Centre in Spain. The variation of hypocentral locations and slip distributions on each finite fault has produced 18 million event-station pairs of synthetic two-horizontal-component low-frequency ground motion time histories that have just become available; those simulated less than 40 km from the faults contain near-fault high-amplitude velocity pulses at the larger magnitudes, where actual data are nonexistent in Iceland (i.e., above magnitude 6.5). The purpose of this study is therefore to use this new and vast near-fault dataset of synthetic ground motions to develop a near-fault GMM for Southwest Iceland using advanced Bayesian Hierarchical Modeling (BHM).
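As a schematic of what a Bayesian hierarchical GMM calibration can look like, the sketch below fits magnitude and distance scaling coefficients together with between-event terms on synthetic records using PyMC; the functional form, priors and data are illustrative assumptions, not the model developed in this study.

```python
# Toy Bayesian hierarchical ground motion model: fixed effects plus event terms.
import numpy as np
import pymc as pm

rng = np.random.default_rng(7)
n_events, n_records = 50, 2000
event_idx = rng.integers(0, n_events, n_records)
mag = rng.uniform(5.5, 7.0, n_records)
rjb = rng.uniform(1.0, 40.0, n_records)
log_pga_obs = 1.0 * mag - 1.5 * np.log(rjb) + rng.normal(0.0, 0.3, n_records)

with pm.Model():
    c0 = pm.Normal("c0", 0.0, 10.0)
    c1 = pm.Normal("c1", 0.0, 10.0)
    c2 = pm.Normal("c2", 0.0, 10.0)
    tau = pm.HalfNormal("tau", 1.0)                    # between-event variability
    phi = pm.HalfNormal("phi", 1.0)                    # within-event variability
    eta = pm.Normal("eta", 0.0, tau, shape=n_events)   # event terms

    mu = c0 + c1 * mag + c2 * np.log(rjb) + eta[event_idx]
    pm.Normal("log_pga", mu, phi, observed=log_pga_obs)

    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```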
How to cite: Bayat, F., Kowsari, M., Rojas, O., Monterrubio-Velasco, M., de la Puente, J., and Halldorsson, B.: A new Near-Fault Earthquake Ground Motion Model for Iceland from Bayesian Hierarchical Modeling, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-41, https://doi.org/10.5194/egusphere-gc11-solidearth-41, 2023.
GC11-solidearth-42 | Orals | Session 2
A first look at the calibration of near-fault motion models to synthetic big data from CyberShake’s application to the Southwest Iceland transform zone
Farnaz Bayat, Milad Kowsari, Otilio Rojas, Marisol Monterrubio-Velasco, Josep de la Puente, and Benedikt Halldorsson
The strongest earthquakes in Iceland take place in its two large transform zones, with magnitudes up to 7.1. As a result, the earthquake hazard in Iceland is highest in the transform zones. The capital region, along with multiple small towns, is either in close proximity to or on top of the Southwest Iceland transform zone, so the seismic risk is highest in this region. A new physical 3D finite-fault system model has been developed that models strike-slip faulting in the transform zone as occurring on an array of north-south, near-vertical, dextral strike-slip faults distributed along the entire transform zone with inter-fault distances of 0.5-5 km. It is well established that for near-vertical strike-slip faults, large-amplitude and long-period velocity pulses are found in the directions parallel and normal to the fault strike, respectively. The former is due to the permanent tectonic displacement resulting from fault slip, and the latter is due to directivity effects. While the former is concentrated in close proximity to the fault, and in particular near the location of the largest subevent of slip on the fault, the directivity pulse is found close to the fault ends and further away along the strike direction, either away from one end or both, depending on whether the fault rupture is unilateral or bilateral. The forward directivity effect is generally considered to be the most damaging feature of the ground motions, particularly for long-period structures in the near-fault region. The recorded near-fault data in Iceland, however, are relatively sparse, making it difficult to accurately capture the physical characteristics of near-fault ground motions. In the ChEESE project, however, we have implemented the new 3D finite-fault system into the CyberShake simulation platform and applied it to kinematic rupture modelling and the corresponding ground motion time-history simulation. As a result, we have produced a vast dataset of synthetic ground motion time histories for Southwest Iceland. The synthetic dataset now contains nearly all possible permutations of near-fault effects and will be parametrized to reveal the scaling of key near-fault ground motion parameters (e.g., pseudo-acceleration spectral amplitude, peak ground velocity, and the period of the near-fault pulses) with the source properties (fault slip distribution and fault plane geometry). This parametrization will increase our understanding of near-fault ground motion and allow the development of simple but physically realistic near-fault GMMs that find practical application in physics-based PSHA.
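As an example of the kind of parametrization involved, the sketch below extracts peak ground velocity and a crude pulse-period estimate from a synthetic velocity trace; it is a generic illustration, not the procedure adopted for the CyberShake dataset.

```python
# Extract PGV and a dominant-period estimate from a synthetic near-fault velocity trace.
import numpy as np

dt = 0.01                                   # sampling interval (s)
t = np.arange(0.0, 40.0, dt)
# Synthetic stand-in for a fault-normal velocity time history containing a long-period pulse.
velocity = 0.8 * np.exp(-((t - 10.0) / 2.0) ** 2) * np.sin(2 * np.pi * t / 4.0)

pgv = np.max(np.abs(velocity))              # peak ground velocity (m/s)

spectrum = np.abs(np.fft.rfft(velocity))
freqs = np.fft.rfftfreq(velocity.size, d=dt)
dominant = freqs[np.argmax(spectrum[1:]) + 1]    # skip the zero-frequency bin
pulse_period = 1.0 / dominant

print(f"PGV = {pgv:.2f} m/s, approximate pulse period = {pulse_period:.1f} s")
```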
How to cite: Bayat, F., Kowsari, M., Rojas, O., Monterrubio-Velasco, M., de la Puente, J., and Halldorsson, B.: A first look at the calibration of near-fault motion models to synthetic big data from CyberShake’s application to the Southwest Iceland transform zone, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-42, https://doi.org/10.5194/egusphere-gc11-solidearth-42, 2023.
GC11-solidearth-43 | Poster | Session 2
Characterization of Earthquake Near-fault Ground Motion Parameters Using an Artificial Neural Network on Synthetic Big Data
Milad Kowsari, Manuel Titos, Farnaz Bayat, Carmen Benitez, Marisol Monterrubio-Velasco, Otilio Rojas, Josep de la Puente, and Benedikt Halldorsson
The South Iceland Seismic Zone (SISZ) and Reykjanes Peninsula Oblique Rift (RPOR) in Southwest Iceland together form one of the two major transform zones in the country that have the greatest capacity for the occurrence of destructive earthquakes. Therefore, in these regions the seismic hazard is highest, and performing a probabilistic seismic hazard assessment (PSHA) is vital as the foundation of earthquake-resistant building design and seismic risk mitigation. It is well known, both from observations and from physics-based (PB) modeling of earthquake rupture and near-fault ground motion simulations, that the most damaging part of near-fault seismic motion is the velocity pulse: the large-amplitude, long-period, pulse-like ground motions found along the fault and away from the ends of strike-slip faults. Such motions impose intense earthquake action primarily on large structures, such as hydroelectric power plants, dams, power lines, bridges and pipelines. However, the data are still too limited to enable the reliable calibration of a physically realistic, yet parsimonious, near-fault model that incorporates such effects into empirical ground motion models (GMMs), thereby allowing their incorporation into a formal PSHA. In the recent European H2020 ChEESE project, however, we established a new 3D finite-fault system model for the SISZ-RPOR system that has now facilitated the simulation of finite-fault earthquake catalogues. Moreover, the catalogues have been implemented into the CyberShake platform, the PB earthquake simulator that was adapted to the characteristics of SISZ-RPOR earthquakes in the ChEESE project. The seismic ground motions of each earthquake in the catalogue have thus been simulated on a dense grid of 594 near-fault stations in Southwest Iceland. The simulations have been carried out on high-performance computing systems of the Barcelona Supercomputing Centre in Spain. Moreover, the hypocentral locations and slip distributions on each synthetic fault have been varied, resulting in approximately 1 million earthquake-station-specific pairs of synthetic low-frequency, high-amplitude near-fault ground motion time histories. In this study, we analyse this dataset using an artificial neural network to reveal its characteristics in terms of amplitudes and near-fault velocity pulses, capturing all key features of such effects. The results will facilitate the incorporation of near-fault effects into new near-fault and far-field GMMs, which are a key element of conventional PSHA. This will enable near-fault PB-PSHA as well as the comparison of PSHA from the synthetic dataset with that from the GMMs. This will usher in a new era of PB-PSHA in Iceland.
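A small feed-forward network of the kind that could map source and site descriptors to near-fault pulse parameters is sketched below on synthetic data; the inputs, targets and architecture are assumptions chosen for illustration rather than the network used in this study.

```python
# Fit a small neural network mapping (magnitude, distance, along-strike position)
# to two illustrative targets (log PGV and pulse period) on synthetic data.
import torch
from torch import nn

torch.manual_seed(0)
n = 4096
X = torch.rand(n, 3) * torch.tensor([2.0, 40.0, 1.0]) + torch.tensor([5.0, 0.5, 0.0])
y = torch.stack([0.8 * X[:, 0] - 0.05 * X[:, 1], 0.5 * X[:, 0] - 2.0], dim=1)
y = y + 0.1 * torch.randn(n, 2)

model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):                    # simple full-batch training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print("final training MSE:", float(loss))
```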
How to cite: Kowsari, M., Titos, M., Bayat, F., Benitez, C., Monterrubio-Velasco, M., Rojas, O., de la Puente, J., and Halldorsson, B.: Characterization of Earthquake Near-fault Ground Motion Parameters Using an Artificial Neural Network on Synthetic Big Data, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-43, https://doi.org/10.5194/egusphere-gc11-solidearth-43, 2023.
GC11-solidearth-44 | Poster | Session 2
Towards physics-based finite-fault Monte Carlo PSHA for Southwest Iceland based on a new fault system model
Milad Kowsari and Benedikt Halldorsson
Throughout history, damaging earthquakes have repeatedly struck Southwest Iceland, the country’s most populated and seismically active region. There, the interplate earthquakes do not occur on sinistral strike-slip faults parallel to the plate margin, but instead on a dense array of near-vertical dextral faults striking perpendicular to the plate margin. This “bookshelf” faulting has not been explicitly accounted for in probabilistic seismic hazard assessment (PSHA). Instead, past PSHA has relied on conventional methods using incomplete earthquake catalogues and simplistic seismic sources. Recently, however, a new and physics-based 3D fault system model of the Southwest Iceland transform zone has been proposed that effectively explains the observed Icelandic earthquake catalogues. The model moreover allows the systematic spatial variation of fault slip rates to be modeled by discrete subzonation of the fault system and the equivalent parameters of seismic activity (Mmax, a- and b-values). Through random realizations of fault locations as postulated by the new model, we have simulated multiple finite-fault earthquake catalogues for the entire bookshelf system for earthquakes of magnitude 4 to 7. This allows us to apply conventional PSHA, but instead of using, e.g., seismic point sources distributed over designated seismic source areas whose activity is predicted from limited historical catalogues, the synthetic finite-fault catalogues are time-independent and fully embody the first two key elements of PSHA: the seismic source locations and their activity rates. Using multiple empirical hybrid Bayesian ground motion models (GMMs) that have recently been proposed for Southwest Iceland, we have predicted the amplitudes (peak ground acceleration and pseudo-acceleration spectral response) of each synthetic finite-fault earthquake on a grid of hypothetical stations. This enables us to carry out a Monte Carlo PSHA based on a physical earthquake fault system model. We present the provisional PSHA results for Southwest Iceland and compare them to other relevant efforts, namely the Icelandic National Annex to Eurocode 8 and ESHM20, but most importantly to those of a parallel study that carries out physics-based PSHA using synthetic ground motion time histories (on the same hypothetical network) from kinematic earthquake rupture modeling (on the same finite-fault earthquake catalogues) implemented in the CyberShake framework adapted to the Southwest Iceland tectonic situation and earthquake source scaling.
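The Monte Carlo hazard computation can be pictured as predicting an amplitude for every catalogue event with a GMM, sampling the aleatory variability, and counting exceedances over the catalogue span, as in the sketch below; the GMM coefficients, catalogue size and time span are illustrative stand-ins, not the models or catalogues used here.

```python
# Toy Monte Carlo PSHA on a synthetic finite-fault catalogue for a single site.
import numpy as np

rng = np.random.default_rng(11)
catalogue_years = 3000.0
n_events = 5000

mag = rng.uniform(4.0, 7.0, n_events)
rjb = rng.uniform(1.0, 80.0, n_events)          # event-to-site distance (km)

# Toy GMM: median ln(PGA in g) plus lognormal aleatory variability.
ln_median = -1.0 + 0.9 * mag - 1.3 * np.log(rjb + 10.0)
pga = np.exp(ln_median + rng.normal(0.0, 0.6, n_events))

levels = np.array([0.05, 0.1, 0.2, 0.4])        # ground-motion levels (g)
annual_rate = np.array([(pga > a).sum() for a in levels]) / catalogue_years
prob_50yr = 1.0 - np.exp(-annual_rate * 50.0)   # Poissonian probability of exceedance

for a, p in zip(levels, prob_50yr):
    print(f"PGA > {a:.2f} g: {p:.3f} probability of exceedance in 50 years")
```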
How to cite: Kowsari, M. and Halldorsson, B.: Towards physics-based finite-fault Monte Carlo PSHA for Southwest Iceland based on a new fault system model, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-44, https://doi.org/10.5194/egusphere-gc11-solidearth-44, 2023.
GC11-solidearth-45 | Poster | Session 2
Feasibility of Multiple Advanced Machine Learning Techniques for Synthetic Finite-fault Earthquake Ground Motion Data
Manuel Titos Luzon, Milad Kowsari, and Benedikt Halldorsson
The overwhelming success of data-driven models in solving complex predictive real-world problems has made them an effective alternative to simulation-driven models. In addition to their computational cost, simulation-driven approaches need to be calibrated with actual data, which links them to the physical theory and thereby improves both our knowledge and the existing models. Similarly, data-driven methods allow knowledge to be deepened by analyzing existing data. Therefore, to develop improved predictive models, one needs to pursue a balance between data-driven and simulation-driven approaches, keeping data as a common pivot. This can be done using Machine Learning (ML), which is a powerful tool to extract knowledge directly from the data and provide information complementary to previously developed physics-based models. The main advantage of ML methods is their ability to process massive and complexly structured data sets that are difficult to handle with traditional data-processing methods. Therein also lies the main disadvantage of ML methods: they need massive amounts of data, which often are not available. In this study, however, we take advantage of a new physics-based model of the earthquake fault system of the Southwest Iceland transform zone and generate synthetic, but physically realistic, finite-fault earthquake catalogues. For each earthquake in the catalogues we simulate seismic ground motion parameters at a large hypothetical station network, thus generating a massive parametric dataset of synthetic seismic data from earthquakes of magnitude 5 to 7. We will apply multiple types of new-generation machine learning techniques, such as deep neural networks (DNNs), deep Bayesian neural networks (DBNNs) and deep Gaussian processes (DGPs), to investigate the ability and efficiency of these methods in capturing the characteristics of the synthetic dataset in terms of key parameters (e.g., ground motion amplitudes, ground motion attenuation versus source-to-site distance, and site effects) and independent parameters such as earthquake magnitude, fault extent and depth. The ML methods will be trained using a procedure known as greedy layer-wise pretraining, in which each layer is initialized via unsupervised pretraining and the output of the previous layer is used as the input for the next one. A typical advantage of these pretrained networks compared to other deep learning models is that the weight initialization renders the optimization process more effective, providing faster convergence by initializing the network parameters near a convergence region. This can help avoid underfitting/overfitting problems when the training samples are highly correlated. The results will provide new insight into the efficiency and usefulness of ML methods on synthetic seismic datasets, with implications for their use on actual, sparser datasets.
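Greedy layer-wise pretraining can be sketched with stacked autoencoders, where each hidden layer is first trained to reconstruct the output of the previously trained layers and the pretrained encoders then initialize a supervised network; the layer sizes and stand-in data below are illustrative assumptions.

```python
# Greedy layer-wise pretraining with stacked autoencoders (illustrative sizes and data).
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(2048, 16)                      # stand-in for parametric ground-motion features
layer_sizes = [16, 64, 32]

encoders = []
inputs = X
for d_in, d_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    encoder, decoder = nn.Linear(d_in, d_out), nn.Linear(d_out, d_in)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(200):                       # unsupervised reconstruction of this layer's input
        opt.zero_grad()
        loss = nn.functional.mse_loss(decoder(torch.relu(encoder(inputs))), inputs)
        loss.backward()
        opt.step()
    encoders.append(encoder)
    inputs = torch.relu(encoder(inputs)).detach()   # feeds the pretraining of the next layer

# Stack the pretrained encoders and add a regression head for supervised fine-tuning.
modules = []
for enc in encoders:
    modules += [enc, nn.ReLU()]
model = nn.Sequential(*modules, nn.Linear(layer_sizes[-1], 1))
print(model)
```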
How to cite: Titos Luzon, M., Kowsari, M., and Halldorsson, B.: Feasibility of Multiple Advanced Machine Learning Techniques for Synthetic Finite-fault Earthquake Ground Motion Data, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-45, https://doi.org/10.5194/egusphere-gc11-solidearth-45, 2023.
GC11-solidearth-48 | Poster | Session 2
Monitoring the sediment dynamics of Maltese beaches. The SIPOBED project and its future challenges.
Luciano Galone, Emanuele Colica, Peter Iregbeyen, Luca Piroddi, Deidun Alan, Gianluca Valentino, Adam Gauci, and Sebastiano D’Amico
Pocket beaches are small beaches bounded by natural promontories, free from direct sedimentary inputs other than those coming from the erosion of their cliffs.
Malta's pocket beaches are one of the most significant geomorphological features of the archipelago. They play an important role for a variety of ecological and economic reasons. Sediment dynamics (mainly of sand) is one of the most relevant factors to be considered in these beach systems. As each pocket beach system behaves as an integrated unit, periodic bathymetric monitoring is essential, and challenging, from an environmental management perspective.
The SIPOBED project (Satellite Investigation to study POcket BEach Dynamics) is developing an integrated tool capable of monitoring sediment dynamics, using SAR and digital photogrammetry to monitor beach topographic variations and multispectral UAV and satellite images to derive bathymetry.
Obtaining updated in situ bathymetric measurements is essential to calibrate and re-calibrate the model over time and to produce more up-to-date and accurate multispectral-derived bathymetry.
In this context, the collection of data by citizens (for instance, bathymetric data gathered by the private boats abundant in the archipelago), in conjunction with the processing power of modern computing, represents the next challenge for Maltese pocket beach monitoring.
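Calibration of multispectral-derived bathymetry against in situ or citizen-collected depths is commonly done with the band log-ratio approach of Stumpf et al. (2003), in which a linear fit relates the blue/green reflectance log ratio to measured depth; the sketch below shows such a fit on synthetic placeholder values rather than SIPOBED data.

```python
# Calibrating a log-ratio bathymetry model against (synthetic) in situ depth measurements.
import numpy as np

rng = np.random.default_rng(5)
n_points = 300

blue = rng.uniform(0.02, 0.10, n_points)       # water-leaving reflectance, blue band
green = rng.uniform(0.02, 0.10, n_points)      # water-leaving reflectance, green band
ratio = np.log(1000.0 * blue) / np.log(1000.0 * green)
# Stand-in for echo-sounder or crowd-sourced depth measurements (m).
depth_in_situ = 20.0 * (ratio - 0.6) + rng.normal(0.0, 0.3, n_points)

# Calibrate the linear model depth = m1 * ratio + m0 against the measurements.
m1, m0 = np.polyfit(ratio, depth_in_situ, 1)
predicted = m1 * ratio + m0
rmse = np.sqrt(np.mean((predicted - depth_in_situ) ** 2))
print(f"m1 = {m1:.1f}, m0 = {m0:.1f}, RMSE = {rmse:.2f} m")

# Recalibration then amounts to refitting m1 and m0 whenever new survey or
# citizen-collected depths become available.
```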
The SIPOBED project is financed by the Malta Council for Science and Technology (MCST, https://mcst.gov.mt/) through the Space Research Fund (Building capacity in the downstream Earth Observation Sector), a programme supported by the European Space Agency.
How to cite: Galone, L., Colica, E., Iregbeyen, P., Piroddi, L., Alan, D., Valentino, G., Gauci, A., and D’Amico, S.: Monitoring the sediment dynamics of Maltese beaches. The SIPOBED project and its future challenges., Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-48, https://doi.org/10.5194/egusphere-gc11-solidearth-48, 2023.
Session 3 – State-of-the-art in computational geosciences
GC11-solidearth-1 | Poster | Session 3
Climate Adaptation and Disaster Assessment using Deep Learning and Earth ObservationThomas Y. Chen
Due to anthropogenic climate change, the frequency and intensity of natural disasters are only increasing. As supercomputing capabilities increase in an era of computational geosciences, artificial intelligence has emerged as a key tool in assessing the progression and impact of these disasters. Recovery from extreme weather events is aided by machine learning-based systems trained on multitemporal satellite imagery. We work on shifting paradigms by seeking to understand the inner decision-making process (interpretability) of convolutional neural networks (CNNs) for damage assessment in buildings after natural disasters, as these deep learning algorithms are typically black boxes. We compare the efficacy of models trained on different input modalities, including combinations of the pre-disaster image, the post-disaster image, the disaster type, and the ground truth of neighboring buildings. Furthermore, we experiment with different loss functions and find that ordinal cross-entropy loss is the most effective criterion for optimization. Finally, we visualize the inputs by applying gradient-weighted class activation mapping (Grad-CAM) to the data, with the end goal of deployment. Earth observation data harnessed by deep learning and computer vision is useful not only for disaster assessment, but also for understanding the other impacts of our changing climate, from marine ecology to agriculture in the Global South.
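Since the abstract singles out ordinal cross-entropy loss, the short Python/PyTorch sketch below shows one common way such a criterion can be implemented, by encoding each ordinal damage grade as a set of cumulative binary thresholds. The number of grades, the tensors and this particular formulation are illustrative assumptions rather than the author's implementation.

import torch
import torch.nn as nn

def ordinal_targets(labels, num_classes):
    """Encode ordinal damage grades 0..K-1 as cumulative binary thresholds,
    e.g. grade 2 of 4 -> [1, 1, 0]."""
    thresholds = torch.arange(num_classes - 1)
    return (labels.unsqueeze(1) > thresholds).float()

def ordinal_cross_entropy(logits, labels):
    """Binary cross-entropy over the K-1 cumulative thresholds."""
    targets = ordinal_targets(labels, logits.shape[1] + 1)
    return nn.functional.binary_cross_entropy_with_logits(logits, targets)

# Example: 4 damage grades -> the network outputs 3 threshold logits per building.
logits = torch.randn(8, 3)                  # batch of 8 predictions
labels = torch.randint(0, 4, (8,))          # ground-truth damage grades
loss = ordinal_cross_entropy(logits, labels)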
How to cite: Chen, T. Y.: Climate Adaptation and Disaster Assessment using Deep Learning and Earth Observation, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-1, https://doi.org/10.5194/egusphere-gc11-solidearth-1, 2023.
GC11-solidearth-61 | Orals | Session 3
Simulation of Geological CO2 Storage with the GEOS Open-Source Multiphysics SimulatorNicola Castelletto
Carbon capture and storage (CCS) is one of the most important technologies to achieve large-scale reduction in global carbon dioxide (CO2) emissions. The essence of CCS is to capture CO2 produced at power plants and industrial facilities and transport it to safe, permanent storage deep underground. Reducing CO2 emissions into the atmosphere is crucial to cut the carbon footprint of our society. The evaluation of CO2 storage candidate sites requires predictive simulation capabilities to assess site capacity and safety. We present an overview of the GEOS multiphysics simulation platform, an open-source simulator capable of serving as the computational engine for CCS evaluation workflows. We will discuss the development path of GEOS, and the motivations to transition from a collection of smaller single-institution code development efforts to a multi-institution collaboration. We will describe the development of a discretization-data infrastructure, a standardized approach to solving single and coupled physics problems, and a strategy to achieve reasonable levels of performance portability across hardware platforms. We will outline the approach to documentation and the planned methods of user interaction as the growth of the user base accelerates.
Portions of this work were performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07-NA27344.
How to cite: Castelletto, N.: Simulation of Geological CO2 Storage with the GEOS Open-Source Multiphysics Simulator, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-61, https://doi.org/10.5194/egusphere-gc11-solidearth-61, 2023.
GC11-solidearth-20 | Poster | Session 3
Operationalizing deployable hazard detection technology based on machine learningThomas Y. Chen and Luke Houtz
As climate change advances, machine learning approaches have been developed to detect and assess the impacts of natural hazards and extreme weather events on infrastructure and communities. One especially useful source of data is Earth observation and satellite imagery. For example, deep neural networks are trained on multitemporal Earth observation data to perform inference on pairs of high-resolution pre-earthquake and post-earthquake imagery and output a prediction of the damage severity for any given area. Different imaging modalities, such as infrared imagery, can be particularly useful for detecting damage. As state-of-the-art models are trained and published in the scientific literature, it is important to consider the deployability of the algorithms. Key bottlenecks to successful deployment in the exascale era include the interpretability of neural networks and the visualization of outputs, as well as the runtime dependencies and memory consumption of the model. An additional consideration is the climate impact of the significant computing resources required by large, complex models that are supposed to aid in climate adaptation. We discuss various methods of real-world deployment, including the use of drones that analyze multitemporal change in real time. Finally, we emphasize the importance of bias mitigation in machine learning and computer vision models, examining recent cutting-edge techniques like the REVISE tool, which thoroughly probes geographical biases in big data. This is necessary because AI requires large quantities of data, and the predictions it makes on unseen data are contingent on the data it has already seen.
How to cite: Chen, T. Y. and Houtz, L.: Operationalizing deployable hazard detection technology based on machine learning, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-20, https://doi.org/10.5194/egusphere-gc11-solidearth-20, 2023.
The second phase (2023-2026) of the EuroHPC Center of Excellence for Exascale in Solid Earth (ChEESE-2P), funded by HORIZON-EUROHPC-JU-2021-COE-01 under Grant Agreement No 101093038, will prepare 11 European flagship codes from different geoscience domains. The codes will be optimised in terms of performance on different types of accelerators, scalability, containerisation, and continuous deployment and portability across tier-0/tier-1 European systems as well as on novel hardware architectures emerging from the EuroHPC Pilots (EuPEX/OpenSequana and EuPilot/RISC-V), by co-designing with mini-apps. Flagship codes and workflows will be combined to form a new generation of 9 Pilot Demonstrators (PDs) and 15 related Simulation Cases (SCs) representing capability and capacity computational challenges selected on the basis of their scientific importance, social relevance, or urgency. On the other hand, the first phase of ChEESE was pivotal in leveraging an ecosystem of European projects and initiatives tackling computational geohazards that will benefit from current and upcoming exascale EuroHPC infrastructures. In particular, Geo-INQUIRE (2022-2024, GA No 101058518) and DT-GEO (2022-2025, GA No 101058129) are two ongoing Horizon Europe projects relevant to the Solid Earth ecosystem. The former will provide virtual and trans-national service access to data and to state-of-the-art numerical models and workflows for monitoring and simulating the dynamic processes in the geosphere at unprecedented levels of detail and precision. The latter will deploy a prototype Digital Twin (DT) on geophysical extremes, including 12 self-contained Digital Twin Components (DTCs) addressing specific hazardous phenomena from volcanoes, tsunamis, earthquakes, and anthropogenically-induced extremes, in order to deliver precise data-informed early warning systems, forecasts, and hazard assessments across multiple time scales. All these initiatives liaise, align, and synergise with EPOS and with longer-term mission-like initiatives such as Destination Earth.
How to cite: Folch, A.: HPC projects in the Solid Earth ecosystem, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-22, https://doi.org/10.5194/egusphere-gc11-solidearth-22, 2023.
GC11-solidearth-14 | Orals | Session 3
Complete workflow for tsunami simulation and hazard calculation in urgent computing using HPC servicesLouise Cordrie, Jacopo Selva, Fabrizio Bernardi, Roberto Tonini, Jorge Macías Sánchez, Carlos Sánchez Linares, Steven Gibbons, and Finn Løvholt
Tsunami urgent computing procedures quantify the potential hazard due to a seismically-induced tsunami right after an earthquake, that is, within minutes to a few hours. The hazard is quantified by simulating the tsunami from source to shore, taking into account the uncertainty in the source parameters and the uncertainty associated with the wave generation, propagation, and inundation.
In the European eFlows4HPC project, an HPC workflow for urgent computing of tsunami hazard assessment is currently being developed, consisting of the following steps: 1) retrieval of parameters for the tsunamigenic earthquake (magnitude, hypocentre and associated uncertainties), 2) definition of a seismic source ensemble, 3) simulation of the tsunami generated by each scenario in the ensemble, 4) aggregation of the results to produce an estimate of tsunami hazard, which also incorporates a basic treatment of uncertainty modelling and 5) update of the ensemble based on incoming data.
Initially implemented on the Power-9 machine at BSC, the workflow has been fully embedded into a PyCOMPSs framework that enables parallel task execution and integrates full tsunami simulations for the first time. The tsunami numerical model (Tsunami-HySEA) computes the tsunami from the source to coastal impact using nested grids with resolution from kilometres to meters.
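To illustrate how steps (1)-(5) above can be orchestrated as parallel tasks in a PyCOMPSs framework, a minimal Python sketch follows. The task names, their arguments and their internals are hypothetical placeholders and do not reproduce the actual eFlows4HPC/Tsunami-HySEA implementation.

from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=1)
def simulate_tsunami(scenario):
    # Step 3: run one tsunami scenario (stubbed here); in the real workflow this
    # would wrap a nested-grid simulation from source to coastal impact.
    return {"scenario": scenario["scenario"], "max_height": 0.0}

def build_ensemble(event_parameters, n_scenarios):
    # Step 2: sample sources from the magnitude/hypocentre uncertainties (placeholder).
    return [dict(event_parameters, scenario=i) for i in range(n_scenarios)]

def aggregate(results):
    # Step 4: combine scenario results into a hazard estimate (placeholder).
    return {"n_scenarios": len(results)}

def urgent_workflow(event_parameters, n_scenarios=100):
    scenarios = build_ensemble(event_parameters, n_scenarios)   # step 2
    futures = [simulate_tsunami(s) for s in scenarios]          # step 3, tasks run in parallel
    results = compss_wait_on(futures)                           # gather once all tasks finish
    return aggregate(results)                                   # step 4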
To limit the number of simulations and converge faster towards stable hazard estimates, new methods for defining the seismic source ensembles have been developed. When applied to several past earthquakes and tsunamis (e.g., the 2003 Boumerdes and the 2017 Kos-Bodrum earthquakes), our new sampling strategy yielded a reduction of 1 or 2 orders of magnitude for ensemble size, allowing a drastic reduction in the computational effort. This reduction may be exploited to improve tsunami simulation accuracy, increasing the computational effort available for each simulation for the same overall cost. The workflow also allows the integration of new incoming data (focal mechanism, seismic or tsunami records) for an “on the fly” update of the PTF based on this new information.
The improvement of the workflow through a well-defined ensemble of scenarios, realistic simulations and the integration of incoming data strongly reduces the uncertainty and yields an update of the probabilistic forecasts without compromising their accuracy. This can be crucial in mitigating the risk far from the seismic source, and in improving risk management by better informing decision-making in an emergency framework.
How to cite: Cordrie, L., Selva, J., Bernardi, F., Tonini, R., Macías Sánchez, J., Sánchez Linares, C., Gibbons, S., and Løvholt, F.: Complete workflow for tsunami simulation and hazard calculation in urgent computing using HPC services, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-14, https://doi.org/10.5194/egusphere-gc11-solidearth-14, 2023.
GC11-solidearth-2 | Poster | Session 3
Contribution of geomatic tools for the study of geological control of ground movements in the province of Al Hoceima - Northern MoroccoAyyoub Sbihi, Hajar El talibi, Hasnaa Harmouzi, and Mohamed Mastére
Morocco has a long geological history spanning several orogenies. The most recent, the Alpine orogeny, gave rise to the Rif chain through the collision of the African and Eurasian tectonic plates. This activity continues to predominate owing to the ongoing convergence of the plates and the indentation of the Alboran microplate. It results, among other things, in the decompression of rock masses and the reopening of inherited discontinuities. Combined with other geological, climatic, topographical and anthropogenic factors, these processes make the Rif unquestionably the area most exposed to natural hazards, including ground instability phenomena. The effects of this hydro-gravitational hazard are all the more severe when they affect vulnerable inhabited areas.
The Al Hoceima region, part of the Rif, presents several indicators of instability. While some areas remain relatively stable, others are subject to factors that may generate ground movement.
The objective of this work is to analyze the relationship between the mapped ground movements of Al Hoceima province and the key geological parameters, namely lithology and fracturing. Using GIS tools, we analyzed the spatial distribution of the mapped movements across the different classes of these two parameters by means of a two-stage geostatistical analysis.
Keywords: Risk, Cartography, GIS, Remote sensing, ground movements, Geostatistics, Al Hoceima.
How to cite: Sbihi, A., El talibi, H., Harmouzi, H., and Mastére, M.: Contribution of geomatic tools for the study of geological control of ground movements in the province of Al Hoceima - Northern Morocco, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-2, https://doi.org/10.5194/egusphere-gc11-solidearth-2, 2023.
GC11-solidearth-35 | Orals | Session 3
Can we model lava flows faster than real-time to assist on a first volcanic emergency response?Carlos Paredes, Marcos David Márquez, and Miguel Llorente
Remote sensing data and numerical simulation models of lava flows have been combined to assess the possibility of a rapid, real-time response during the effusive crisis of the recent Cumbre Vieja (September 19 - December 13, 2021) eruptive episode. Here we use the monitoring products openly distributed by COPERNICUS through the Emergency Management Service (EMSR546) and by the Cabildo Insular de la Palma (daily-updated polygons of the extent of the lava flow), together with the lava flow emplacement as simulated with VolcFlow during the first seven days of the eruption, supported by the locations of the effusive foci provided by the IGN. The morphometric analysis of the satellite data has allowed us to estimate, assuming a non-Newtonian behaviour of the lava, the emitted flows and their viscosities, using expressions based on the morphological dimensions of the flows, their advance speed, and their density. The morphometric values thus obtained have been used as initial conditions for the numerical calibration, which has been carried out by maximising the Jaccard coefficient (i.e., minimising its complement) used as the objective function, although other geometric measures could also be used as functionals to be optimised. Although we have designed and present a task flow as a procedure for a dynamic, semi-automatic numerical calibration over time of the rheological parameters needed to simulate the day-to-day evolution of the lava inundation zone, based on the recorded remote-sensing information, for its validation we carried out the search for the solution of the optimisation problem manually.
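The calibration loop described above can be summarised schematically as follows (Python/NumPy). Here run_volcflow is a hypothetical stand-in for an actual VolcFlow run, and the dummy footprints and parameter grid are illustrative values only.

import numpy as np

def jaccard(observed, simulated):
    """Jaccard coefficient between two boolean lava-extent rasters."""
    intersection = np.logical_and(observed, simulated).sum()
    union = np.logical_or(observed, simulated).sum()
    return intersection / union if union else 0.0

def run_volcflow(viscosity, yield_strength, shape=(100, 100)):
    # Placeholder for a call to the lava-flow model; it just returns a dummy
    # footprint whose size loosely depends on the rheological parameters.
    extent = np.zeros(shape, dtype=bool)
    half_width = int(5 + 40 * 1e7 / (viscosity + yield_strength))
    extent[:, :min(half_width, shape[1])] = True
    return extent

observed_extent = run_volcflow(8e6, 4.2e4)   # stand-in for the COPERNICUS/EMSR546 polygons

# Calibration: maximise the Jaccard fit (equivalently, minimise 1 - J) over a small
# grid of illustrative viscosity [Pa s] and yield strength [Pa] values.
candidates = [(mu, tau) for mu in (1e6, 8e6, 4e7) for tau in (2.0e4, 3.5e4, 4.2e4)]
best = max(candidates, key=lambda p: jaccard(observed_extent, run_volcflow(*p)))
print("best (viscosity, yield strength):", best)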
The results have allowed us to obtain Jaccard coefficients between 85% and 60%, with a computation time, including calibration, shorter than the 7 days of lava flow simulated. An emission rate of about 140 m³/s of lava has been estimated during the first 24 h of the eruptive process, which decreased asymptotically to 60 m³/s. This value can be cross-checked against information from other remote sources using time-averaged discharge rate (TADR) estimates. Viscosity varied from 8 × 10⁶ Pa·s, or a yield strength of 42 × 10³ Pa, in the first hours, to 4 × 10⁷ Pa·s and 35 × 10³ Pa, respectively, during the remainder of the seven days. In addition, the modelling allowed mapping the likely evolution of the lava flow field until 27 September, with an acceptable lava height distribution for the highest values and a Jaccard coefficient of 85%, in order to characterise the response time available, as a function of the computational cost, for the numerical estimation of the rheology and for generating real-time forecasts of the evolution.
This integration of satellite data with numerical model calibration for parametric estimation of the La Palma 2021 eruption holds great promise for providing a real-time response system for future volcanic eruptions, supplying the necessary information, mainly in the form of dynamic evolution maps, for efficient emergency preparedness and management.
How to cite: Paredes, C., Márquez, M. D., and Llorente, M.: Can we model lava flows faster than real-time to assist on a first volcanic emergency response?, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-35, https://doi.org/10.5194/egusphere-gc11-solidearth-35, 2023.
GC11-solidearth-3 | Poster | Session 3
Three-dimensional, multiphase flow numerical models of phreatic volcanic eruptions.Tomaso Esposti Ongaro and Mattia de' Michieli Vitturi
Explosive volcanic eruptions are characterized by the ejection into the atmosphere of volcanic gases and fragments of magma and/or lithics at high temperature, pressure and velocity. They encompass a broad range of magnitudes, with volumes of ejecta spanning from less than 10⁶ m³, to the 10⁹-10¹¹ m³ of Plinian eruptions, up to the largest known volcanic events, able to erupt thousands of km³ of magma. Phreatic eruptions are among the smallest in this range; they do not involve the eruption of fresh magma, but are instead triggered by a sudden rise of pressure and temperature in a shallow magmatic-hydrothermal system. Despite their relatively small size, phreatic eruptions are frequent on Earth and difficult to anticipate, and therefore represent a significant hazard, as testified by the recent eruptions at Tongariro’s Te-Maari crater (NZ, 2012) and by the tragic development of events at Ontake (JP, 2014) and Whakaari/White Island (NZ, 2019).
The main challenges of the numerical simulation of explosive volcanic phenomena have traditionally been identified in the complex fluid dynamics of polydisperse multiphase mixtures (with particle grains ranging from a few microns to metres) and in the extremely broad range of relevant dynamical scales characterizing compressible turbulent flows of gas and particles in the atmosphere. Three-dimensional, high-performance computer models based on different approximations of the multiphase flow theory have been designed to simulate the fluid dynamics of explosive eruptions, and to define hazard and impact scenarios. However, until now, it was difficult to quantify the uncertainty associated with numerical predictions.
We discuss here the present bottlenecks and challenges of the 3D modelling of phreatic volcanic eruptions in the quest for the urgent definition of impact scenarios and probabilistic hazard assessment at Vulcano island (Aeolian archipelago, Italy). Exascale computing in these applications offers the opportunity to increase the complexity of the physical model (including new key processes such as the flashing of liquid water), to describe the wide range of lithic fragments ejected during the eruption, to achieve unprecedentedly high spatial resolution at the source and close to the terrain, and to perform large ensembles of numerical simulations to quantify the epistemic uncertainty associated with the model initial and boundary conditions.
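As a sketch of how such simulation ensembles could be generated for uncertainty quantification, the Python snippet below draws a Latin hypercube sample over a few hypothetical uncertain inputs; the parameter names and ranges are illustrative assumptions, not the actual model setup.

import numpy as np
from scipy.stats import qmc

# Illustrative uncertain inputs for a phreatic-eruption scenario ensemble:
# vent overpressure [MPa], gas mass fraction [-], lithic volume fraction [-].
l_bounds = [1.0, 0.05, 0.1]
u_bounds = [10.0, 0.30, 0.9]

sampler = qmc.LatinHypercube(d=3, seed=42)
ensemble = qmc.scale(sampler.random(n=128), l_bounds, u_bounds)

# Each row defines initial/boundary conditions for one 3D multiphase simulation.
print(ensemble[:3])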
Challenges associated with the development, maintenance and porting of the numerical models to new HPC architectures are finally discussed.
How to cite: Esposti Ongaro, T. and de' Michieli Vitturi, M.: Three-dimensional, multiphase flow numerical models of phreatic volcanic eruptions., Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-3, https://doi.org/10.5194/egusphere-gc11-solidearth-3, 2023.
GC11-solidearth-23 | Orals | Session 3
Modelling of extreme sea-level hazards: state-of-the-art and future challengesIva Tojčić, Cléa Denamiel, and Ivica Vilibić
Meteotsunami events – tsunami-like ocean waves driven by atmospheric disturbances – are, by nature, rare, specific to certain geographical regions and highly variable in time. Consequently, the coastal hazards due to these types of events are known to be difficult to forecast with the state-of-the-art numerical models presently applied around the world.
In order to help local communities better prepare for these destructive events (e.g., set up temporary protection against flooding and waves, avoid swimming, etc.) and minimize the losses, the Croatian Meteotsunami Early Warning System (CMeEWS) has recently been implemented in testing mode in the Adriatic Sea. The CMeEWS is mostly based on the Adriatic Sea and Coast (AdriSC) modelling suite and uses an innovative deterministic-stochastic approach for extreme sea-level event predictions. It provides meteotsunami hazard forecasts based on (1) daily deterministic forecasts by coupled kilometer-scale atmosphere-ocean models, (2) atmospheric observations and (3) stochastic forecasts of extreme sea-level distributions at endangered locations derived with a surrogate model approach. Some of these steps require substantial computational resources and an optimized data flow, which ultimately define the operability of the service.
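A schematic sketch of step (3) is given below: stochastic atmospheric-disturbance parameters are propagated through a pre-trained surrogate to obtain a distribution of maximum sea levels at a given location. The surrogate function, the parameter ranges and the threshold are hypothetical stand-ins, not the CMeEWS/AdriSC surrogate itself.

import numpy as np

rng = np.random.default_rng(0)

def surrogate_max_sea_level(pressure_amp, speed, direction):
    # Trivial stand-in for a surrogate trained on pre-computed kilometre-scale
    # simulations; returns a maximum sea-surface elevation in metres.
    return 0.02 * pressure_amp + 0.005 * speed + 0.1 * np.cos(np.radians(direction))

# Stochastic inputs constrained by the atmospheric observations (illustrative ranges).
n = 10000
pressure_amp = rng.normal(3.0, 0.5, n)        # hPa
speed = rng.normal(22.0, 3.0, n)              # m/s
direction = rng.normal(240.0, 15.0, n)        # degrees

levels = surrogate_max_sea_level(pressure_amp, speed, direction)
print("probability of exceeding 0.15 m:", float((levels > 0.15).mean()))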
Here, the advantages but also the drawbacks of such an approach will be presented through several applications of the AdriSC modelling suite during meteotsunami events in the Adriatic Sea. The future challenges concerning meteotsunami extreme sea-level modelling will be discussed and some potential avenues to further develop the model skills will be considered.
How to cite: Tojčić, I., Denamiel, C., and Vilibić, I.: Modelling of extreme sea-level hazards: state-of-the-art and future challenges, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-23, https://doi.org/10.5194/egusphere-gc11-solidearth-23, 2023.
GC11-solidearth-5 | Poster | Session 3
Hydro-mechanical modeling of injection-induced seismicity at the Deep Heat Mining Project of Basel, SwitzerlandAuregan Boyet, Silvia De Simone, and Víctor Vilarrasa
Fluid injection in subsurface reservoirs often induces seismicity, which is a limiting factor in the deployment of geo-energies, as is the case for Enhanced Geothermal Systems (EGS). EGS are commonly deep granitic reservoirs subject to hydraulic stimulation in order to enhance the fracture permeability and consequently the heat production. Injection-induced seismicity also occurs after the stop of injection, and in many cases the largest earthquakes occur after the shut-in. This counterintuitive post-injection large-magnitude seismicity is still not well understood, and its modelling is necessary to improve the understanding of the processes triggering the seismicity. Pressure-driven processes, such as pore pressure increase and poroelastic stress/strain variations, have been identified as triggers of seismicity, together with stress interactions, thermal disparities and geomechanical interactions. We design a coupled hydro-mechanical 2D model of the well-known case of post-injection induced seismicity at the Basel EGS (Deep Heat Mining Project at Basel, Switzerland, 2006). We use CODE_BRIGHT, a finite element method software able to perform monolithic coupled thermo-hydro-mechanical analysis in geological media. The faults respond to a Mohr-Coulomb failure criterion with strain weakening and dilatancy, which allows simulating fault reactivation and the associated aperture variation. The model is able to reproduce the pressure and stress variations and the consequent fault reactivations throughout the simulations. The Basel EGS has been well documented and its characteristics are available, and we are able to reproduce the spatio-temporal evolution of the induced seismicity. Yet, our current numerical method requires long computation times. To speed up simulations, we simplify the model geometry by grouping faults that undergo similar static stress transfer, computed with the code Coulomb3, which uses analytical solutions to compute the stress changes caused by fault slip. The combination of numerical with analytical solutions is an effective way of obtaining faster computing models. Simultaneously assimilating monitoring data in real time with a computationally efficient model would enable a better understanding of the effects of fluid injection on the stability of the reservoir and, potentially, the mitigation of the induced seismicity.
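The grouping of faults by similar static stress transfer can be illustrated with the Coulomb failure stress change used in such analyses. The Python sketch below uses an illustrative sign convention, friction coefficient and hypothetical fault values, and does not reproduce Coulomb3 itself.

def coulomb_stress_change(delta_shear, delta_normal, effective_friction=0.6):
    """Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, with the normal stress change taken
    positive for unclamping (tension-positive convention)."""
    return delta_shear + effective_friction * delta_normal

def group_by_stress_transfer(faults, tol=0.01):
    """Group faults whose static stress changes (MPa) differ by less than tol,
    mimicking the geometry simplification described in the abstract."""
    groups = []
    for name, dcfs in sorted(faults.items(), key=lambda kv: kv[1]):
        if groups and abs(dcfs - groups[-1][-1][1]) < tol:
            groups[-1].append((name, dcfs))
        else:
            groups.append([(name, dcfs)])
    return groups

# Hypothetical receiver faults and their dCFS values in MPa.
faults = {"F1": coulomb_stress_change(0.12, -0.05),
          "F2": coulomb_stress_change(0.11, -0.04),
          "F3": coulomb_stress_change(0.20, -0.08)}
print(group_by_stress_transfer(faults))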
How to cite: Boyet, A., De Simone, S., and Vilarrasa, V.: Hydro-mechanical modeling of injection-induced seismicity at the Deep Heat Mining Project of Basel, Switzerland, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-5, https://doi.org/10.5194/egusphere-gc11-solidearth-5, 2023.
GC11-solidearth-6 | Poster | Session 3
A computationally efficient numerical model to understand potential CO2 leakage risk within gigatonne scale geologic storageIman R. Kivi, Roman Y. Makhnenko, Curtis M. Oldenburg, Jonny Rutqvist, and Victor Vilarrasa
The majority of available climate change mitigation pathways, targeting net-zero CO2 emissions by 2050, rely heavily on the permanent storage of CO2 in deep geologic formations at the gigatonne scale. The spatial and temporal scales of interest to geologic carbon storage (GCS) raise concerns about CO2 leakage to shallow sediments or back into the atmosphere. The assessment of CO2 storage performance is subject to huge computational costs of numerically simulating CO2 migration across geologic layers at the basin scale and is therefore restricted in practice to multi-century periods. Here, we present a computationally affordable and yet physically sound model to understand the likelihood of CO2 leakage over geologic time scales (millions of years) (Kivi et al., 2022). The model accounts for vertical two-phase flow and transport of CO2 and brine in a multi-layered system, comprising a sequence of aquifers and sealing rocks from the crystalline basement up to the surface (a total thickness of 1600 m), representative of a sedimentary basin. We argue that the model is capable of capturing the dynamics of CO2 leakage during basin-wide storage because the lateral advancement of CO2 plume injected from a dense grid of wellbores transforms into buoyant vertical rise within a short period after shut-in. A critical step in the proposed model is its initialization, which should reproduce the average CO2 saturation column and pressure profiles. We initialize the model by injecting CO2 at a constant overpressure into an upper lateral portion of the target aquifer while the bottom boundary is permeable to brine, resembling brine displacement by CO2 plume or leakage at basin margins. The optimum model setting can be achieved by adjusting the brine leakage parameter through calibration. We solve the governing equations using the finite element code CODE_BRIGHT. Discretizing the model with 7,100 quadrilateral elements and using an adaptive time-stepping scheme, the CPU time for the simulation of CO2 containment in the subsurface for 1 million years is around 140 hours on a Xeon CPU of speed 2.5 GHz. The obtained results suggest that the upward CO2 flow in free phase is strongly hindered by the sequence of caprocks even if they are pervasively fractured. CO2 leakage towards the surface is governed by the intrinsically slow molecular diffusion process, featuring aqueous CO2 transport rates as low as 1 meter per several thousand years. The model shows that GCS in multi-layered geologic settings is extremely unlikely to be associated with leakage, implying that GCS is a secure carbon removal technology.
Reference
Kivi, I. R., Makhnenko, R. Y., Oldenburg, C. M., Rutqvist, J., & Vilarrasa, V. (2022). Multi-layered systems for permanent geologic storage of CO2 at the gigatonne scale. Geophysical Research Letters, 49, e2022GL100443. https://doi.org/10.1029/2022GL100443
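The reported diffusion-limited transport rate (on the order of 1 meter per several thousand years) can be checked with a simple scaling t ≈ L²/D_eff. In the Python sketch below the effective diffusion coefficients are assumed values chosen for illustration, since the abstract does not report one.

SECONDS_PER_YEAR = 3.15e7

def diffusion_time_years(length_m, d_eff_m2_s):
    """Characteristic diffusion time t ~ L^2 / D_eff for aqueous CO2 transport."""
    return length_m**2 / d_eff_m2_s / SECONDS_PER_YEAR

# Assumed effective diffusivities (molecular diffusion reduced by porosity and
# tortuosity); values around 1e-10 to 1e-11 m^2/s are plausible for tight caprocks.
for d_eff in (1e-10, 1e-11):
    years = diffusion_time_years(1.0, d_eff)
    print(f"D_eff = {d_eff:.0e} m^2/s -> ~{years:,.0f} years to diffuse 1 m")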
How to cite: Kivi, I. R., Makhnenko, R. Y., Oldenburg, C. M., Rutqvist, J., and Vilarrasa, V.: A computationally efficient numerical model to understand potential CO2 leakage risk within gigatonne scale geologic storage, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-6, https://doi.org/10.5194/egusphere-gc11-solidearth-6, 2023.
GC11-solidearth-24 | Poster | Session 3
Tsunami risk management in the Exascale Era: Global advances and the European standpoint
Cláudia Reis, André R. Barbosa, Denis Istrati, Stephane Clain, Rui Ferreira, Luis Matias, and Erin Wirth
Regional and local tsunami sources are a cliché of scientific disaggregation. From the physical perspective, despite emerging studies on cascading hazard and risk, hazard characterization often treats the tsunami as an individual event without addressing the effects of the primary hazard (typically a high-magnitude earthquake) that triggered it. Moreover, tsunami effects are partitioned into single processes, such as hydraulic effects or induced effects like debris transport, an approach often adopted when treating complex phenomena. From a technical perspective, describing cascading hazards and translating them into a composite loading pattern for natural and built environments is challenging, and the difficulty increases substantially when fluid-soil interactions are considered. From a modeling perspective, physical and numerical simulations are employed to complement scarce databases of extreme tsunami events. However, the level of modeling sophistication deemed necessary to reproduce such complex phenomena is high, and there are uncertainties associated with natural phenomena and their modelling, ranging from the genesis of the tsunami to the structural and community response. The number and influence of these uncertainties pose an extraordinary concern when developing mitigation measures. From a risk management perspective, cascading natural and anthropogenic hazards constitute a challenge for combining safety requirements with financial, social, and ecological concerns. Risk management can benefit from strengthening the ties between natural hazards and engineering practitioners, linking science and industry, and promoting dialogue between risk analysts and policy-makers.
Ultimately, risk management requires heterogeneous data and information from real and synthetic origins. Yet, the quality of data used for risk management often depends on the computational resources (in terms of performance, energy, and storage capacity) needed to simulate complex multi-scale and multi-physics phenomena, as well as to analyze large data sets. For example, the quality of the numerical solutions often depends on the amount of data used to calibrate the models, and the runtime of the models needs to be aligned with time constraints (e.g., faster-than-real-time tsunami simulations for early warning systems). The North American platform Hazus is capable of producing risk maps. In European risk assessment, there is a lack of integration and interaction among the results of the GEM, SERA, and TSUMAPS-NEAM projects, which developed seismic and tsunami hazard studies, respectively. Computational modeling aids the advancement of scientific knowledge by aggregating the numerous factors involved and translating them into tsunami risk management policies.
A global trend in geosciences and engineering is to develop sophisticated numerical schemes and to build computational facilities capable of solving them, thereby aiming to reduce uncertainty levels and to prepare the scientific (r)evolution for the so-called Exascale Era. The present work aims to gather multidisciplinary perspectives in a discussion about: 1) challenges to overcome in tsunami risk management, such as the sophistication of earthquake and tsunami numerical schemes; 2) uncertainty awareness and the future need to develop consensual and systematic measures to reduce uncertainties associated with geophysical and engineering processes; 3) pros and cons of using HPC resources towards safety and operational performance levels; and 4) applicability to critical infrastructures.
How to cite: Reis, C., Barbosa, A. R., Istrati, D., Clain, S., Ferreira, R., Matias, L., and Wirth, E.: Tsunami risk management in the Exascale Era:Global advances and the European standpoint, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-24, https://doi.org/10.5194/egusphere-gc11-solidearth-24, 2023.
GC11-solidearth-11 | Poster | Session 3
Combining High-Performance Computing and Neural Networks for Tsunami Maximum Height and Arrival Time Forecasts
Juan Francisco Rodríguez Gálvez, Jorge Macías Sáncez, Manuel Jesús Castro Díaz, Marc de la Asunción, and Carlos Sánchez-Linares
Operational Tsunami Early Warning Systems (TEWS) are crucial for mitigating and greatly reducing the impact of tsunamis on coastal communities worldwide. In the North-East Atlantic, the Mediterranean, and connected Seas (NEAM) region, these systems have historically utilized Decision Matrices for this purpose. The very short time between tsunami generation and landfall in this region makes it extremely challenging to use real-time simulations to produce more precise alert levels, and historically the only way to include a computational component in the alert was to use precomputed databases. Nevertheless, in recent years, computing times for a single scenario have been progressively reduced to a few minutes or even seconds, depending on the computational resources available. In particular, the EDANYA group at the University of Málaga, Spain, has focused on this topic and developed the GPU code Tsunami-HySEA for Faster Than Real Time (FTRT) tsunami simulations. This code has been implemented and tested in the TEWS of several countries (such as Spain, Italy, and Chile) and has undergone extensive testing, verification, and validation.
In this study, we propose the use of neural networks (NN) to predict the maximum height and arrival time of tsunamis in the context of TEWS. The advantage of this approach is that the inference time required is negligible (less than one second) and that it can be done on a simple laptop. This makes it possible to account for uncertain input information and still provide results within a few seconds. As tsunamis are rare events, numerical simulations performed with Tsunami-HySEA are used to train the NN model. This part of the workflow requires producing a large number of simulations, and therefore substantial HPC resources. We used the Tsunami-HySEA code and the resources of the Spanish Supercomputing Network (RES) to generate the numerical results used to train and apply the neural networks.
Machine learning (ML) techniques have gained widespread adoption and are being applied in all areas of research, including tsunami modeling. In this work, we employ Multi-Layer Perceptron (MLP) neural networks to forecast the maximum height and arrival time of tsunamis at specific locations along the Chipiona-Cádiz coast in Southwestern Spain. We initially train several individual models and show that they provide accurate results. We then explore ensemble techniques, which combine multiple single models in order to reduce variance; the ensemble models often produce improved predictions.
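As an illustration of the ensemble idea, here is a minimal sketch using scikit-learn MLP regressors averaged over different random initializations; the feature/target arrays, layer sizes, and number of ensemble members are placeholders, not the configuration used by the authors.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Placeholder dataset: rows are simulated scenarios, X holds source parameters
# and y holds the two targets [maximum height (m), arrival time (s)].
rng = np.random.default_rng(0)
X = rng.random((1500, 6))
y = rng.random((1500, 2))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Ensemble of MLPs differing only in their random initialization; averaging
# the member predictions reduces the variance of the forecast.
members = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=s).fit(X_tr, y_tr)
    for s in range(5)
]
y_pred = np.mean([m.predict(X_te) for m in members], axis=0)
print("mean absolute error per target:", np.abs(y_pred - y_te).mean(axis=0))
```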
The proposed methodology is tested for tsunamis generated by earthquakes on the Horseshoe fault. The goal is to develop a neural network (NN) model for predicting the maximum height and arrival time of such tsunamis at multiple coastal locations simultaneously. The results of our analysis show that deep learning is a promising approach for this task. The proposed NN models produce errors of less than 6 cm for the maximum wave height and less than 212 s for the arrival time for tsunamis generated on the Horseshoe fault in the Northeastern Atlantic.
How to cite: Rodríguez Gálvez, J. F., Macías Sáncez, J., Castro Díaz, M. J., de la Asunción, M., and Sánchez-Linares, C.: Combining High-Performance Computing and Neural Networks for Tsunami Maximum Height and Arrival Time Forecasts, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-11, https://doi.org/10.5194/egusphere-gc11-solidearth-11, 2023.
GC11-solidearth-13 | Poster | Session 3
Ash fallout long term probabilistic volcanic hazard assessment for Neapolitan volcanoes: an example of what Earth Scientists can do with HPC resources
Manuel Stocchi, Silvia Massaro, Beatriz Martínz Montesinos, Laura Sandri, Jacopo Selva, Roberto Sulpizio, Biagio Giaccio, Massimiliano Moscatelli, Edoardo Peronace, Marco Nocentini, Roberto Isaia, Manuel Titos Luzón, Pierfrancesco Dellino, Giuseppe Naso, and Antonio Costa
The creation of hazard maps for volcanic phenomena requires taking into account the intrinsic complexity and variability of eruptions. Here we present an example of how HPC enables a high-resolution, multi-source probabilistic hazard assessment for tephra fallout over a domain covering Southern Italy.
The three active volcanoes in the Neapolitan area, Somma-Vesuvius, Campi Flegrei, and Ischia, were considered as volcanic sources. We explored three explosive size classes (Small, Medium, and Large) for Somma-Vesuvius and Campi Flegrei, and one explosive size class (Large) for Ischia. For each size class, we performed 1,500 numerical simulations of ash dispersion (10,500 in total) using the Fall3D (V8.0) model over a computational domain covering Southern Italy with a 0.03° ⨉ 0.03° (~3 km ⨉ 3 km) resolution. Within each size class, the eruptive parameters were randomly sampled from well-suited probability distributions, and the meteorological conditions were varied by randomly sampling a day between 1990 and 2020 and retrieving the corresponding data from the ECMWF ERA5 database. This allowed us to explore the intra-class variability and to quantify the aleatoric uncertainty. The results of these simulations were post-processed with a statistical approach, assigning to each eruption a weight based on its magnitude and on the annual eruption rate of its size class. For the case of Campi Flegrei, the variability in the eruptive vent position was also explored by constructing a grid of possible vent locations with different spatial probabilities. By merging the results obtained for each source and size class, we produced a portfolio of hazard maps showing the expected mean annual frequency of exceeding selected thresholds in ground tephra load. A disaggregation analysis was also performed to understand which source and/or size class has the greatest impact on a particular area.
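A schematic of the post-processing step described above, in which each scenario is weighted by an annual rate and threshold exceedances are counted cell by cell; the rates, thresholds, and array sizes are illustrative placeholders, not the study's values.

```python
import numpy as np

# Placeholder inputs: load[i, j] = simulated ground tephra load (kg/m^2) of
# scenario i at grid cell j; rate[i] = annual rate assigned to scenario i
# (the size-class annual frequency split among its member scenarios).
rng = np.random.default_rng(1)
n_scenarios, n_cells = 1500, 10_000
load = rng.lognormal(mean=0.0, sigma=2.0, size=(n_scenarios, n_cells))
rate = np.full(n_scenarios, 1.0e-4 / n_scenarios)

thresholds = [1.0, 10.0, 100.0]           # kg/m^2 (illustrative)
for thr in thresholds:
    # Mean annual frequency (MAF) of exceeding 'thr' at each cell:
    # sum of the rates of all scenarios whose load exceeds the threshold.
    maf = (rate[:, None] * (load > thr)).sum(axis=0)
    print(f"threshold {thr:6.1f} kg/m^2 -> max MAF over domain = {maf.max():.2e} /yr")
```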
The completion of this work, considering both the numerical simulations and the statistical processing of the results, required more than 5,000 core hours and the handling of more than 2 TB of data, an effort that would not have been possible without access to high-level HPC resources.
How to cite: Stocchi, M., Massaro, S., Martínz Montesinos, B., Sandri, L., Selva, J., Sulpizio, R., Giaccio, B., Moscatelli, M., Peronace, E., Nocentini, M., Isaia, R., Titos Luzón, M., Dellino, P., Naso, G., and Costa, A.: Ash fallout long term probabilistic volcanic hazard assessment for Neapolitan volcanoes: an example of what Earth Scientists can do with HPC resources, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-13, https://doi.org/10.5194/egusphere-gc11-solidearth-13, 2023.
GC11-solidearth-17 | Poster | Session 3
GALES: a general-purpose multi-physics FEM code
Deepak Garg, Paolo Papale, Antonella Longo, and Chiara Montagna
We present GALES, a versatile open-source FEM-based multi-physics numerical code for volcanic and general-purpose problems. The code is developed and applied for a suite of problems in magma and volcano dynamics. The software is written in modern C++ and is parallelized using the OpenMPI and Trilinos libraries. GALES comprises several advanced solvers for 2D and 3D problems dealing with heat transfer; compressible to incompressible mono- and multi-fluid flows in Eulerian and Arbitrary Lagrangian-Eulerian (ALE) formulations; elastic (static and dynamic) deformation of solids; and fluid-solid interaction. The fluid solvers account for both Newtonian and non-Newtonian rheologies, and all solvers handle transient as well as steady problems. Non-linear problems are linearized using Newton's method. All solvers have been thoroughly verified and validated on standard benchmarks. The software is regularly used for high-performance computing (HPC) on our local cluster machines at INGV, Pisa, Italy. Recently, we analyzed the performance of the code in a series of strong-scaling tests conducted on the Marenostrum supercomputer at the Barcelona Supercomputing Centre (BSC) on up to 12,288 cores. The results revealed a computational speedup close to ideal and parallel efficiency above satisfactory levels as long as the element/core ratio is sufficiently large, making GALES an excellent choice for utilizing HPC resources efficiently for complex magma flow and rock dynamics problems.
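For reference, the strong-scaling metrics behind such tests reduce to simple speedup and parallel-efficiency ratios; in the sketch below the largest core count matches the one quoted above, but the wall-clock times are invented placeholders, not measurements from the Marenostrum runs.

```python
# Relative strong-scaling metrics when the baseline run already uses p0 cores:
#   speedup(p)    = T(p0) / T(p)
#   efficiency(p) = speedup(p) / (p / p0)
p0, T0 = 768, 1000.0                                           # hypothetical baseline
runs = {1536: 520.0, 3072: 270.0, 6144: 145.0, 12288: 80.0}    # hypothetical timings (s)

for p, T in runs.items():
    speedup = T0 / T
    efficiency = speedup / (p / p0)
    print(f"{p:6d} cores: speedup = {speedup:5.2f}x, efficiency = {efficiency:5.1%}")
```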
How to cite: Garg, D., Papale, P., Longo, A., and Montagna, C.: GALES: a general-purpose multi-physics FEM code, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-17, https://doi.org/10.5194/egusphere-gc11-solidearth-17, 2023.
GC11-solidearth-18 | Poster | Session 3
HPC in Rapid Disaster Response: Numerical simulations for hazard assessment of a potential dam breach of the Kyiv cistern reservoir, Ukraine
Carlos Sánchez-Linares, Jorge Macías, and Manuel J. Castro Díaz
How to cite: Sánchez-Linares, C., Macías, J., and Castro Díaz, M. J.: HPC in Rapid Disaster Response: Numerical simulations for hazard assessment of a potential dam breach of the Kyiv cistern reservoir, Ukraine, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-18, https://doi.org/10.5194/egusphere-gc11-solidearth-18, 2023.
GC11-solidearth-19 | Poster | Session 3
Physics-informed Neural Networks to Simulate Subsurface Fluid Flow in Fractured Media
Linus Walter, Francesco Parisio, Qingkai Kong, Sara Hanson-Hedgecock, and Víctor Vilarrasa
Reliable reservoir characterization of the strata, fractures, and hydraulic properties is needed to determine the energy storage capacity of geothermal systems. We apply state-of-the-art Physics-Informed Neural Networks (PINNs) to model subsurface flow in a geothermal reservoir. A PINN can incorporate any physical law that can be described by partial differential equations. We obtain a ground-truth dataset by running a virtual pumping-well test in the numerical code CODE_BRIGHT. This model consists of a low-permeability rock matrix intersected by high-permeability fractures. We approximate the reservoir permeability field with an Artificial Neural Network (ANN). Secondly, we model the fluid pressure evolution with the PINN by informing it with the experimental well-testing data. Since observation wells are sparse in space (only the injection well in our case), we feed the predicted pressure into a hydraulic mass balance equation; the residual of this equation enters the loss function at random collocation points inside the domain. Our results indicate that the ANN is able to approximate the permeability field even for a high permeability contrast. In addition, the successful interpolation of the pressure field shows that the PINN is a promising method for matching field data with physical laws. In contrast to numerical models, PINNs shift the computational effort toward training, while reducing the resources needed for the forward evaluation. Nevertheless, training a 3D reservoir model can hardly be achieved on an ordinary workstation, since the training data may include several million entries. In addition, computational costs increase with the inclusion of multiphysics processes in the PINN. We plan to train the PINN model on parallelized GPUs to significantly increase the training speed of the ANNs.
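A minimal PINN sketch in the spirit of the approach described above, written with PyTorch and using a simplified one-dimensional single-phase mass balance (dp/dt = D d²p/dx²) as a stand-in for the full hydraulic problem; the network size, diffusivity, and synthetic well data are assumptions for illustration, not the authors' setup.

```python
import torch

# Pressure network p(x, t); the loss combines (i) misfit against well-test data
# and (ii) the residual of a simplified mass balance dp/dt - D * d2p/dx2 = 0
# evaluated at random collocation points. All values are illustrative.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
D = 1.0e-2  # assumed hydraulic diffusivity (dimensionless setup)

def pde_residual(xt):
    xt = xt.requires_grad_(True)
    p = net(xt)
    grads = torch.autograd.grad(p, xt, torch.ones_like(p), create_graph=True)[0]
    p_x, p_t = grads[:, 0:1], grads[:, 1:2]
    p_xx = torch.autograd.grad(p_x, xt, torch.ones_like(p_x), create_graph=True)[0][:, 0:1]
    return p_t - D * p_xx

# Synthetic "well-test" observations at the injection point x = 0.
xt_obs = torch.zeros(50, 2)
xt_obs[:, 1] = torch.linspace(0.0, 1.0, 50)
p_obs = torch.rand(50, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    xt_col = torch.rand(256, 2)            # random (x, t) collocation points
    loss = ((net(xt_obs) - p_obs) ** 2).mean() + (pde_residual(xt_col) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final combined loss:", float(loss))
```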
How to cite: Walter, L., Parisio, F., Kong, Q., Hanson-Hedgecock, S., and Vilarrasa, V.: Physics-informed Neural Networks to Simulate Subsurface Fluid Flow in Fractured Media, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-19, https://doi.org/10.5194/egusphere-gc11-solidearth-19, 2023.
GC11-solidearth-26 | Poster | Session 3
Coupled permafrost-groundwater simulation applied to a spent fuel nuclear waste repository
Thomas Zwinger, Denis Cohen, and Lasse Koskinen
The Olkiluoto spent nuclear fuel repository in Eurajoki, Finland, is the first one on the planet that will go operational in the foreseeable future. The long-term safety of this repository with respect to future ice-age conditions, and the consequently occurring permafrost and altered groundwater flow conditions, needs to be evaluated. To this end, a Darcy model for saturated aquifer groundwater flow, combined with a heat transfer module accounting for phase change (i.e. freezing) as well as solute transport and bedrock deformation models, has been implemented in the multi-physics Finite Element Method code Elmer. The set of equations is based on continuum thermo-mechanical principles. The application of this newly developed model to Olkiluoto aims to simulate the evolution of permafrost thickness, talik development, and groundwater flow and salinity changes at and around the repository during the next 120,000 years. This is achieved by solving the aforementioned model components in a coupled way in three dimensions on a mesh that discretizes a rectangular block of 8.8 km by 6.8 km, stretching from the surface of Olkiluoto down to a depth of 10 km, where a geothermal heat flux is applied. The horizontal resolution of 30 m by 30 m, in combination with vertical resolutions down to 10 cm imposed by the thickness of the temporally varying soil and rock layers imported from high-resolution data, results in a mesh containing 5 million nodes/elements on which the system of equations is solved using CSC's HPC cluster mahti. The high spatial gradients in permeability (e.g. from soil to granitic bedrock) impose numerical challenges for the simulations, which are forced by the RCP 4.5 climate scenario. The investigated time-span contains cold periods between AD 47,000 and AD 110,000. Surface conditions are provided using freezing/thawing n-factors based on monthly temperature variations and a wetness index defining varying vegetation conditions. Our scenario run is able to project permafrost development at high spatial resolution and shows a clear impact of permeable soil layers and faults in the bedrock that focus groundwater flow and solute transport.
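The freezing/thawing n-factor forcing mentioned above maps air temperature to a ground-surface temperature. A minimal sketch with assumed n-factor values and an invented monthly temperature series (not the data used in the study):

```python
# Sketch of a freezing/thawing n-factor surface boundary condition: monthly air
# temperatures are scaled by n_f (freezing, T_air < 0) or n_t (thawing,
# T_air >= 0) to obtain the ground-surface temperature forcing.  The n-factor
# values and the monthly series are illustrative placeholders.
n_f, n_t = 0.5, 0.9
monthly_air_T = [-12.0, -10.0, -5.0, 1.0, 7.0, 13.0, 16.0, 14.0, 9.0, 3.0, -3.0, -9.0]

ground_T = [n_f * T if T < 0.0 else n_t * T for T in monthly_air_T]
mean_annual = sum(ground_T) / len(ground_T)
print("ground-surface temperatures (degC):", [round(T, 1) for T in ground_T])
print("mean annual ground-surface temperature:", round(mean_annual, 2), "degC")
```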
How to cite: Zwinger, T., Cohen, D., and Koskinen, L.: Coupled permafrost-groundwater simulation applied to a spent fuel nuclear waste repository, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-26, https://doi.org/10.5194/egusphere-gc11-solidearth-26, 2023.
GC11-solidearth-27 | Poster | Session 3
Risk assessment and mitigation of induced seismicity for geo-energy related applications at the basin scale
Haiqing Wu, Jian Xie, and Victor Vilarrasa
Fluid injection-induced earthquakes involve a series of complex physical processes. Evaluating these processes at the basin scale requires a vast amount of input data and enormous computational capability to perform risk analysis in near-real time, which remains the most critical challenge in geo-energy related applications. Although current computational tools can simulate field-scale problems well, they fall far short of the requirements of basin-scale analysis. Alternatively, we can apply verified analytical solutions of certain processes to speed up the calculations when moving from the field to the basin scale. With this in mind, we adopt analytical solutions for the pore pressure diffusion and stress variations caused by fluid injection into the reservoir. Using the superposition principle, the analytical solutions can address the coupled problem of multiple injection wells at the basin scale. We then assess fault stability and the associated induced seismicity potential using the hydro-mechanical perturbations throughout the basin computed analytically.
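As a sketch of what such an analytical superposition can look like, the example below sums Theis-type well-function solutions over several injection wells; the transmissivity, storativity, injection rates, well positions, and observation point are assumed placeholder values, and the actual solutions used by the authors may differ.

```python
import numpy as np
from scipy.special import exp1   # exponential integral E1, i.e. the Theis well function

# Superposition of analytical pressure-buildup (drawup) solutions for several
# injection wells in a confined aquifer.  All parameter values are assumed.
T_aq = 1.0e-5        # transmissivity (m^2/s), assumed
S_aq = 1.0e-4        # storativity (-), assumed
wells = [                                  # ((x, y) in m, injection rate in m^3/s)
    ((0.0, 0.0), 0.02),
    ((5_000.0, 0.0), 0.03),
    ((0.0, 8_000.0), 0.01),
]

def head_buildup(x, y, t):
    """Head change (m) at (x, y) after t seconds, summed over all wells."""
    dh = 0.0
    for (xw, yw), Q in wells:
        r2 = (x - xw) ** 2 + (y - yw) ** 2
        u = r2 * S_aq / (4.0 * T_aq * t)
        dh += Q / (4.0 * np.pi * T_aq) * exp1(u)
    return dh

t = 10.0 * 365.25 * 86400.0                # 10 years of injection
print("head change at a point 3 km away:", round(head_buildup(3_000.0, 0.0, t), 1), "m")
```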
To handle the uncertainty of geological properties, including fault and reservoir geometries and hydraulic and mechanical properties, we perform Monte Carlo simulations to analyze their effects on induced seismicity potential. Such a comprehensive parametric-space analysis currently represents an insurmountable obstacle for numerical solution, even when computing the problem in parallel. Based on the results obtained at both the field and basin scales, we propose a feasible methodology to mitigate the magnitude of induced seismicity, and even to avoid large earthquakes, in subsurface energy-related projects. This development will represent a great tool for the risk evaluation of induced earthquakes, not only during site selection but throughout the whole lifetime of geo-energy projects.
How to cite: Wu, H., Xie, J., and Vilarrasa, V.: Risk assessment and mitigation of induced seismicity for geo-energy related applications at the basin scale, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-27, https://doi.org/10.5194/egusphere-gc11-solidearth-27, 2023.
GC11-solidearth-28 | Poster | Session 3
Optimal source selection for local probabilistic tsunami hazard analysis
Alice Abbate, Manuela Volpe, Fabrizio Romano, Roberto Tonini, Gareth Davies, Roberto Basili, and Stefano Lorito
Local hazard models for evacuation planning should accurately describe the probability of exceeding a certain intensity (e.g. flow depth, current velocity, etc.) over a period of years.
Computation-based probabilistic forecasting for earthquake-generated tsunamis deals with tens of thousands to millions of scenarios to be simulated over very large domains and with sufficient spatial resolution of the bathymetry model. The associated high computational cost can be tackled by means of workflows that take advantage of HPC facilities and numerical models specifically designed for multi-GPU architectures.
For the sake of feasibility, Seismic Probabilistic Tsunami Hazard Assessment (S-PTHA) at local scale exploits some approximations in both source and tsunami modeling, but uncertainty quantification is still lacking in the estimates. Here, we propose a possible approach to reduce the computational cost of local-scale S-PTHA, while providing uncertainty quantification.
The algorithm performs an efficient selection of scenarios based on the tsunami impact on a site.
The workflow is designed to take advantage of parallel execution on HPC clusters. As a first step, the whole ensemble of scenarios is split into a finite number of regions defined by the tectonic regionalization; the procedure then selects the scenarios that contribute most to the hazard at an offshore point (in front of the target site) and for specific intensity levels. Finally, for each intensity level, the set of synthetic tsunamigenic earthquakes is optimally sampled with replacement in a Monte Carlo Importance Sampling scheme.
The tsunamis potentially triggered by the selected scenarios are explicitly simulated with the GPU-based Tsunami-HySEA nonlinear shallow water code on high spatial resolution grids (up to 10 m), and the Monte Carlo errors are then propagated to the onshore estimates.
This procedure lessens the computational cost of local S-PTHA by reducing the number of simulations to be conducted, while quantifying the epistemic uncertainties associated with the inundation modeling without appreciable loss of information content.
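A compact illustration of Monte Carlo importance sampling with replacement for an exceedance-rate estimate, including its sampling error; the scenario rates, weights, and intensities below are synthetic placeholders rather than outputs of the described workflow.

```python
import numpy as np

# Importance sampling with replacement: scenarios are drawn with probability
# proportional to an importance weight (here a proxy for their contribution to
# offshore hazard), "simulated", and re-weighted to give an unbiased estimate
# of the onshore exceedance rate together with its Monte Carlo standard error.
rng = np.random.default_rng(0)
n_total, n_sample = 100_000, 500
annual_rate = rng.lognormal(-14.0, 1.0, n_total)    # per-scenario annual rates (synthetic)
offshore_amp = rng.lognormal(0.0, 1.0, n_total)     # offshore intensity proxy (synthetic)

weights = annual_rate * offshore_amp
prob = weights / weights.sum()
idx = rng.choice(n_total, size=n_sample, replace=True, p=prob)

# Stand-in for the explicit inundation simulation of the sampled scenarios only.
onshore = offshore_amp[idx] * rng.lognormal(0.0, 0.3, n_sample)
threshold = 3.0

# Unbiased estimator of the annual rate of exceeding the onshore threshold.
values = annual_rate[idx] * (onshore > threshold) / prob[idx]
rate_hat = values.mean()
mc_stderr = values.std(ddof=1) / np.sqrt(n_sample)
print(f"exceedance rate ~ {rate_hat:.2e} /yr  (MC standard error ~ {mc_stderr:.1e})")
```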
How to cite: Abbate, A., Volpe, M., Romano, F., Tonini, R., Davies, G., Basili, R., and Lorito, S.: Optimal source selection for local probabilistic tsunami hazard analysis, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-28, https://doi.org/10.5194/egusphere-gc11-solidearth-28, 2023.
GC11-solidearth-29 | Poster | Session 3
Tsunami evacuation using an agent-based model in Chile
Natalia Zamora, Jorge León, Alejandra Gubler, and Patricio Catalán
Tsunami evacuation planning can be crucial to mitigating the impact on human lives. During evacuation procedures, vertical evacuation can be an effective way to provide protection to people if horizontal evacuation is not feasible. However, it can imply an associated risk if the different scenarios related not only to the uncertainties of the tsunami phenomena, but also to the behavior of people during the evacuation phase, are not considered. For this reason, in recent years, tsunami risk management in Chile has incorporated the propagation of uncertainties in each phase of the study of tsunami impacts and the design of evacuation routes. Agent-based models allow coupling tsunami inundation scenarios with people's interactions and decision-making. In this research, thousands of tsunami scenarios are considered to establish tsunami hazard maps based on flow depths and tsunami arrival times. We chose a worst-case scenario from this database and coupled it with an agent-based model to assess tsunami evacuation in Viña del Mar, Chile. Moreover, we examined an improved situation with the same characteristics but including 11 tsunami vertical-evacuation (TVE) facilities. Our findings show that the tsunami flood might lead to significant human casualties in the worst-case scenario (above 50% of the agents). Nevertheless, including the TVE structures could reduce this number by roughly 10%. Future work will include the propagation of uncertainties in all phases of the evacuation, where HPC will aid the simulation of agent-based models that require intensive computational resources.
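A toy sketch of the kind of agent-inundation coupling described above: each agent compares the time needed to reach its nearest shelter with the tsunami arrival time at its position. Agent positions, speeds, shelter locations, and arrival times are synthetic placeholders, not the Viña del Mar setup.

```python
import numpy as np

# Minimal agent-based check: an agent "fails" if its travel time to the nearest
# shelter exceeds the tsunami arrival time at its location.  All inputs are synthetic.
rng = np.random.default_rng(3)
n_agents = 10_000
agents = rng.uniform(0.0, 5_000.0, size=(n_agents, 2))       # x, y positions (m)
speed = rng.normal(1.2, 0.3, n_agents).clip(0.5, 2.0)        # walking speed (m/s)
shelters = np.array([[500.0, 4_500.0], [2_500.0, 4_800.0], [4_500.0, 4_600.0]])
arrival_time = 1_200.0 + 0.3 * agents[:, 1]                  # toy arrival-time field (s)

dist = np.linalg.norm(agents[:, None, :] - shelters[None, :, :], axis=2).min(axis=1)
evac_time = dist / speed
failed = np.mean(evac_time > arrival_time)
print(f"fraction of agents failing to evacuate in time: {failed:.1%}")
```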
How to cite: Zamora, N., León, J., Gubler, A., and Catalán, P.: Tsunami evacuation using an agent-based model in Chile, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-29, https://doi.org/10.5194/egusphere-gc11-solidearth-29, 2023.
GC11-solidearth-30 | Poster | Session 3
Exhaustive High-Performance Computing utilization in the estimation of the economic impact of tsunamis on Spanish coastlines
Alejandro Gonzalez del Pino, Marta Fernández, Miguel Llorente, Jorge Macías, Julián García-Mayordomo, and Carlos Paredes
Tsunamis are low-probability phenomena with high-risk potential. The lack of field data emphasizes the need to use simulation software to model the potentially devastating effects of a tsunami and to use this information to develop safety measures, sustainable actions, and social resilience for the future. These measures may include, among many others, spatial planning, the design of evacuation routes, or the allocation of economic resources through insurance or other instruments to mitigate tsunami impacts. Our work introduces a Monte Carlo-like method for simulating the potential impact of tsunamis on the Spanish coastlines, specifically in the provinces of Huelva and Cádiz for the Atlantic region, and the Balearic Islands, Ceuta, Melilla, and the eastern Iberian coast for the Mediterranean region. The method follows a pseudo-probabilistic seismic-triggered tsunami simulation approach, considering a particular selection of active faults with associated probability distributions for some of the source parameters, and a Sobol'-sequence sampling strategy to generate a synthetic seismic catalogue. All of the roughly 4,000 synthetic seismic events are simulated over the areas of interest on high-resolution grids (five-meter pixel resolution) using a two-way nested mesh approach, retrieving the maximum water height, maximum mass flow, and maximum modulus of the velocity at each grid cell. These numerical simulations are computed in a GPU environment, harnessing resources allocated at several high-performance computing (HPC) centres. HPC infrastructures play a crucial role in the computing aspect of the project, as the computational power required to complete full-fledged high-resolution tsunami simulations in a reasonable time is substantial. The numerical database of retrieved variables generated throughout this study offers an excellent foundation for evaluating various tsunami-related hazards and risks.
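A brief sketch of Sobol'-sequence sampling of source parameters with SciPy's quasi-Monte Carlo module; the five parameters and their ranges are hypothetical, not the fault parameterization used in the study.

```python
import numpy as np
from scipy.stats import qmc

# Quasi-random (Sobol') sampling of tsunamigenic source parameters to build a
# synthetic catalogue; parameter bounds are illustrative placeholders.
sampler = qmc.Sobol(d=5, scramble=True, seed=42)
unit = sampler.random_base2(m=12)            # 2**12 = 4096 quasi-random points
lower = [6.5, 5.0, 0.0, 20.0, 60.0]          # Mw, depth (km), strike, dip, rake (deg)
upper = [8.8, 40.0, 360.0, 60.0, 120.0]
catalogue = qmc.scale(unit, lower, upper)
print(catalogue.shape, catalogue[0].round(2))
```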
The final product focuses on generating frequency distributions of the economic impact for the Spanish insurance sector (Consorcio de Compensación de Seguros, CCS). The CCS is a public-private entity insuring most natural catastrophic events in Spain. A consistent, spatially distributed economic database of building-related insured values has been constructed and aggregated in conjunction with the numerical tsunami simulations. The proposed procedure allows an economic impact indicator to be associated with each source. Further statistical analysis of the economic impact estimators yields various conclusions, such as an improved definition of the worst-case scenario (effect-based rather than worst-triggered), the most and least likely economic impacts, and the most hazardous fault sources both overall and locally, among others.
How to cite: Gonzalez del Pino, A., Fernández, M., Llorente, M., Macías, J., García-Mayordomo, J., and Paredes, C.: Exhaustive High-Performance Computing utilization in the estimation of the economic impact of tsunamis on Spanish coastlines, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-30, https://doi.org/10.5194/egusphere-gc11-solidearth-30, 2023.
GC11-solidearth-32 | Poster | Session 3
Numerical simulation of injection-induced seismicityVictor Vilarrasa, Haiqing Wu, Iman Vaezi, Sri Kalyan Tangirala, Auregan Boyet, Silvia De Simone, and Iman R. Kivi
Geo-energies, such as geothermal energy, geologic carbon storage, and subsurface energy storage, will play a relevant role in reaching carbon neutrality and allowing net carbon removal towards the mid-century. Geo-energies imply fluid injection into and/or production from the subsurface, which alters the initial effective stress state and may destabilize fractures and faults, thereby inducing seismicity. Understanding the processes that control induced seismicity is paramount to developing reliable forecasting tools to manage induced earthquakes and keep them below undesired levels. Accurately modeling the processes that occur during fracture/fault slip leading to induced seismicity is challenging because coupled thermo-hydro-mechanical-chemical (THMC) processes interact with each other: (1) fluid injection causes pore pressure buildup that changes the total stress and deforms the rock; (2) deformation leads to permeability changes that affect pore pressure diffusion; (3) fluids reach the injection formation at a colder temperature than that of the rock, cooling the vicinity of the well and causing changes in the fluid properties (density, viscosity, enthalpy, heat capacity) as well as cooling-induced stress reduction; and (4) the injected fluids are not in chemical equilibrium with the host rock, leading to geochemical reactions of mineral dissolution/precipitation that may alter rock properties, in particular the shear strength. In the framework of GEoREST (www.georest.eu), a Starting Grant from the European Research Council (ERC), we aim to develop forecasting tools for injection-induced seismicity by devising methodologies to efficiently simulate the coupled THMC processes that occur as a result of fluid injection, thereby improving our understanding of the mechanisms that trigger induced seismicity. To this end, we use the fully coupled finite element software CODE_BRIGHT, which includes capabilities such as friction following the Mohr-Coulomb failure criterion with strain weakening and dilatancy, enabling simulations of fracture/fault reactivation. Our investigations have already contributed to understanding the processes that induced the seismicity at the Enhanced Geothermal System (EGS) in Basel, Switzerland, at the Castor Underground Gas Storage, Spain, and the reservoir-induced seismicity at Nova Ponte, Brazil. To achieve scalability and speed up the calculations, with the eventual goal of managing induced seismicity in real time, we intend to incorporate efficient state-of-the-art linear solvers, such as HYPRE and PETSc, into CODE_BRIGHT.
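To make the frictional mechanism concrete, one common way to write a Mohr-Coulomb criterion with effective stress and linear strain weakening is sketched below; this is a generic illustration with hypothetical parameters, not the authors' exact CODE_BRIGHT formulation.

```latex
\tau_f = c' + \mu(\varepsilon^p)\,\sigma'_n, \qquad
\sigma'_n = \sigma_n - p, \qquad
\mu(\varepsilon^p) = \mu_{\mathrm{peak}}
  - \bigl(\mu_{\mathrm{peak}} - \mu_{\mathrm{res}}\bigr)\,
    \min\!\left(\frac{\varepsilon^p}{\varepsilon^p_c},\, 1\right)
```

Here \(\tau_f\) is the shear strength on the fracture/fault, \(c'\) the cohesion, \(\sigma'_n\) the effective normal stress (total normal stress \(\sigma_n\) minus pore pressure \(p\)), \(\mu_{\mathrm{peak}}\) and \(\mu_{\mathrm{res}}\) the peak and residual friction coefficients, and \(\varepsilon^p_c\) the plastic strain over which weakening occurs. Injection raises \(p\), which lowers \(\sigma'_n\) and hence the strength, illustrating the destabilizing mechanism described in the abstract.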
How to cite: Vilarrasa, V., Wu, H., Vaezi, I., Tangirala, S. K., Boyet, A., De Simone, S., and Kivi, I. R.: Numerical simulation of injection-induced seismicity, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-32, https://doi.org/10.5194/egusphere-gc11-solidearth-32, 2023.
GC11-solidearth-33 | Poster | Session 3
A digital twin component for volcanic dispersal and falloutLeonardo Mingari, Arnau Folch, Alejandra Guerrero, Sara Barsotti, Talfan Barnie, Giovanni Macedonio, and Antonio Costa
A Digital Twin Component (DTC) provides users with digital replicas of different components of the Earth system through unified frameworks that integrate real-time observations and state-of-the-art numerical models. Scenarios of extreme natural-hazard events can be studied from their genesis through propagation to impacts using a single DTC or multiple coupled DTCs. The EU DT-GEO project (2022-2025) is implementing a prototype digital twin on geophysical extremes consisting of 12 interrelated Digital Twin Components, intended as self-contained and containerised software entities embedding numerical model codes, management of real-time data streams, and data assimilation methodologies. DTCs can be deployed and executed in centralized High Performance Computing (HPC) and cloud computing Research Infrastructures (RIs). In particular, DTC-V2 is implementing an ensemble-based automated operational system for deterministic and probabilistic forecasts of long-range ash dispersal and local-scale tephra fallout. The system continuously screens different ground-based and satellite-based data sources; when a volcanic eruption occurs, a workflow is automatically triggered to stream and pre-process the data, ingest it into the FALL3D dispersal model, execute the model on centralized or distributed HPC resources, and post-process the results. The DTCs will provide capabilities for analyses, forecasts, uncertainty quantification, and "what if" scenarios for natural and anthropogenic hazards, with a long-term ambition towards the Destination Earth mission-like initiative.
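In heavily simplified form, the eruption-triggered ensemble workflow described above might look like the sketch below; all function and parameter names (detect_eruption, run_member, column_height_km, ...) are hypothetical placeholders and do not correspond to the actual DT-GEO or FALL3D interfaces.

```python
"""Illustrative sketch of an eruption-triggered ensemble dispersal workflow.

All names are hypothetical placeholders; this is not the DT-GEO / FALL3D code.
"""
import concurrent.futures as cf
import random


def detect_eruption(feeds):
    """Screen ground-based and satellite-based feeds; return event metadata or None."""
    for feed in feeds:
        event = feed.poll()  # placeholder: each feed object is assumed to expose poll()
        if event is not None:
            return event
    return None


def build_ensemble(event, n_members=32):
    """Perturb uncertain source parameters to create ensemble members."""
    return [
        {
            "id": i,
            "column_height_km": random.uniform(0.8, 1.2) * event["column_height_km"],
            "duration_h": random.uniform(0.8, 1.2) * event["duration_h"],
        }
        for i in range(n_members)
    ]


def run_member(member):
    """Launch one dispersal run (in practice, submit an HPC job) and return its output path."""
    return f"member_{member['id']:03d}.nc"  # placeholder for the real model execution


def forecast(event):
    """Run the ensemble concurrently and return member outputs for post-processing."""
    members = build_ensemble(event)
    with cf.ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(run_member, members))
```

A post-processing step would then aggregate the member outputs into probabilistic products, e.g. exceedance probabilities of ash load or column concentration.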
How to cite: Mingari, L., Folch, A., Guerrero, A., Barsotti, S., Barnie, T., Macedonio, G., and Costa, A.: A digital twin component for volcanic dispersal and fallout, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-33, https://doi.org/10.5194/egusphere-gc11-solidearth-33, 2023.
GC11-solidearth-36 | Poster | Session 3
Integrating 3D physics-based earthquake simulations to seismic risk assessment: The case of Bogotá, Colombia.Andrea C. Riaño, Juan C. Reyes, Jacobo Bielak, Doriam Restrepo, Ricardo Taborda, and Luis E. Yamin
The basin beneath the greater metropolitan area of Bogotá, Colombia, consists of soft material deposits with shear wave velocity Vs ≤ 400 m/s that reach depths of up to 425 m. Located on a high plateau in the eastern cordillera of the Colombian Andes, this highly populated urban area is subject to significant seismic hazard from local and regional fault systems. The potential ground-motion amplification during earthquakes, due to the presence of soft soil deposits and the surface and sub-surface topography, is of great importance for better understanding and estimating the seismic risk of the city. Given the scarcity of seismic data from large-magnitude events, and in an effort to advance modern seismic hazard mapping for the region, this study aimed to develop a physics-based framework to generate synthetic ground-motion records that help to better understand the basin and other amplification effects during strong earthquake shaking, and to incorporate these effects into the estimation of seismic risk. To this end, a set of simulations was first conducted with Hercules, the octree-based finite element wave propagation simulator developed by the Quake Group at Carnegie Mellon University, to reproduce conditions similar to those observed in Bogotá during past seismic events (e.g., the 2008 Quetame earthquake) and to identify the impacts of hypothetical strong-earthquake scenarios. The results from these simulations were then integrated into a new software package for post-processing and assessing the seismic risk in the Bogotá region for the different scenarios selected.
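As a generic illustration of the final post-processing step (combining simulated ground motions with exposure to estimate risk), the sketch below pairs peak ground acceleration from a synthetic scenario with a lognormal fragility curve; it is not the authors' software package, and all parameter values are made up.

```python
"""Generic sketch: from synthetic ground motions to a scenario loss estimate.

Not the post-processing package described in the abstract; the lognormal
fragility model and all numbers are illustrative assumptions.
"""
import math


def lognormal_cdf(x, median, beta):
    """Probability of reaching a damage state given intensity x (lognormal fragility)."""
    if x <= 0.0:
        return 0.0
    return 0.5 * (1.0 + math.erf(math.log(x / median) / (beta * math.sqrt(2.0))))


def expected_scenario_loss(pga_by_site, exposure_by_site, median=0.4, beta=0.6):
    """Combine simulated peak ground acceleration (in g) with exposed value per site."""
    total = 0.0
    for site, pga in pga_by_site.items():
        p_damage = lognormal_cdf(pga, median, beta)
        total += p_damage * exposure_by_site.get(site, 0.0)
    return total


if __name__ == "__main__":
    pga = {"site_A": 0.25, "site_B": 0.55}          # PGA (g) from a synthetic scenario
    exposure = {"site_A": 1.0e6, "site_B": 2.5e6}   # exposed value per site
    print(f"Expected scenario loss: {expected_scenario_loss(pga, exposure):,.0f}")
```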
How to cite: Riaño, A. C., Reyes, J. C., Bielak, J., Restrepo, D., Taborda, R., and Yamin, L. E.: Integrating 3D physics-based earthquake simulations to seismic risk assessment: The case of Bogotá, Colombia., Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-36, https://doi.org/10.5194/egusphere-gc11-solidearth-36, 2023.
GC11-solidearth-46 | Poster | Session 3
Modeling Depth averaged velocity and Boundary Shear Stress distribution with complex flowsEbissa Kedir, Chandra Ojha, and Hari Prasad
In the present study, the depth-averaged velocity and boundary shear stress in non-prismatic compound channels with three different converging floodplain angles, ranging from 1.43° to 7.59°, have been studied. The analytical solutions were derived by considering the forces acting on the channel bed and walls. Five key parameters, i.e., the non-dimensional coefficient, the secondary flow term, the secondary flow coefficient, the friction factor, and the dimensionless eddy viscosity, were considered and discussed. New expressions for the non-dimensional coefficient and the integration constants were derived based on novel boundary conditions. The model was applied to different data sets from the present experiments and from experiments reported by other sources, to examine and analyse the influence of the floodplain converging angle on the depth-averaged velocity and boundary shear stress distributions. The results show that the non-dimensional coefficient plays an important role in describing the variation of the depth-averaged velocity and boundary shear stress distributions with different floodplain converging angles. Thus, the variation of the non-dimensional coefficient needs attention, since it affects the secondary flow term and the secondary flow coefficient in both the main channel and the floodplains. The analysis shows that the depth-averaged velocities are sensitive to the shear stress-dependent non-dimensional coefficient, and the analytical solutions agree well with experimental data when all five parameters are included. It is expected that the developed model may be of interest to others working on complex flow modeling.
Keywords: Depth-average velocity, Converging floodplain angles, Non-dimensional coefficient, Non-prismatic compound Channels
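For context, a depth-averaged streamwise momentum balance of the Shiono-Knight type, in which the five parameters listed above appear, can be sketched as follows; this is a generic form, and the authors' exact formulation for converging floodplains may differ.

```latex
\rho g H S_0
- \frac{f}{8}\,\rho U_d^{2}\,\sqrt{1+\frac{1}{s^{2}}}
+ \frac{\partial}{\partial y}\!\left[\rho\,\lambda\,H^{2}\sqrt{\tfrac{f}{8}}\;U_d\,
  \frac{\partial U_d}{\partial y}\right]
= \underbrace{\frac{\partial}{\partial y}\Bigl[H\,(\rho\,\overline{uv})_d\Bigr]}_{\Gamma},
\qquad
\tau_b = \frac{f}{8}\,\rho\,U_d^{2}
```

Here \(U_d(y)\) is the depth-averaged streamwise velocity, \(H\) the local depth, \(S_0\) the bed slope, \(s\) the side slope, \(f\) the friction factor, \(\lambda\) the dimensionless eddy viscosity, \(\Gamma\) the secondary-flow term (characterised by a secondary flow coefficient), and \(\tau_b\) the boundary shear stress; a non-dimensional coefficient of the kind discussed in the abstract then enters through the lateral variation of these terms across the converging floodplains.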
How to cite: Kedir, E., Ojha, C., and Prasad, H.: Modeling Depth averaged velocity and Boundary Shear Stress distribution with complex flows, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-46, https://doi.org/10.5194/egusphere-gc11-solidearth-46, 2023.
Session 4 – Horizon Europe and EuroHPC Policies
GC11-solidearth-57 | Orals | Session 4
The European High Performance Computing Joint Undertaking (EuroHPC JU) – Leading the way in European SupercomputingLinda Gesenhues
The European High Performance Computing Joint Undertaking (EuroHPC JU) is a joint initiative that pools the resources of the EU, 32 European countries and private partners, with the objective of making Europe a world leader in supercomputing.
The EuroHPC JU is procuring and installing supercomputers across Europe. Wherever in Europe they are located, European scientists & users from the public sector and industry can benefit from these supercomputers. Free access is already being provided to European research organisations, with wider access planned for the future.
In parallel, the EuroHPC JU is funding an ambitious research and innovation (R&I) programme to develop a full European supercomputing supply chain: from processors and software, to the applications to be run on these supercomputers, and the know-how needed to build strong European expertise.
How to cite: Gesenhues, L.: The European High Performance Computing Joint Undertaking (EuroHPC JU) – Leading the way in European Supercomputing, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-57, https://doi.org/10.5194/egusphere-gc11-solidearth-57, 2023.
GC11-solidearth-63 | Orals | Session 4
Access to MareNostrum5 and other European HPC infrastructuresOriol Pineda and Sergi Girona
The European Union, together with its Member States, is investing strongly in deploying a world-class, complete and diverse HPC infrastructure through the recently created European High-Performance Computing Joint Undertaking (EuroHPC). The consortium formed by Spain, Portugal and Turkey is one of the key actors of this initiative, participating together with EuroHPC in the acquisition and operation of MareNostrum5, which will reach a total cost of ownership (TCO) above €200 million over five years. The system will be accessible through the EuroHPC Extreme Scale calls, as well as through the national calls of these three countries. Furthermore, BSC, as the hosting entity, will provide direct user support and open scientific collaborations to research groups interested in using this infrastructure.
How to cite: Pineda, O. and Girona, S.: Access to MareNostrum5 and other European HPC infrastructures, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-63, https://doi.org/10.5194/egusphere-gc11-solidearth-63, 2023.
GC11-solidearth-25 | Orals | Session 4
LUMI supercomputer for European researchersThomas Zwinger, Jussi Heikonen, and Pekka Manninen
The LUMI supercomputer, currently ranked #3 on the Top500 list and the most powerful system in Europe, entered full production earlier this year. LUMI is jointly funded by the EuroHPC JU and a consortium of ten countries led by CSC in Finland.
In this presentation we first discuss the architecture of LUMI from the user’s point of view. More precisely, we introduce the various partitions that make LUMI exceptionally versatile and suitable for a wide array of applications and workflows.
To fully harness the computing power of the system, programmers must be able to utilize its AMD MI250X GPUs. Accordingly, we present the available GPU programming models and paradigms, together with the performance analysis tools. We will provide guidance on which porting strategy to apply depending on the starting point of the application the user wants to port to and deploy on LUMI, e.g. its existing code base, programming language, and problem size.
Finally, we discuss the access and support model: various modes of access and calls for access are available from both EuroHPC and the consortium countries. Support is handled by the distributed LUMI User Support Team, to which all consortium countries contribute. The consortium also runs a comprehensive training programme.
How to cite: Zwinger, T., Heikonen, J., and Manninen, P.: LUMI supercomputer for European researchers, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-25, https://doi.org/10.5194/egusphere-gc11-solidearth-25, 2023.
The European supercomputer Leonardo is one of the three pre-exascale systems announced by the EuroHPC Joint Undertaking and represents a significant step forward for European research in computational science. CINECA has a long history of supplying some of the most powerful supercomputers in the world, and its strong partnership with the EuroHPC initiative has led to the realization of the Leonardo project.
Leonardo (ranked 4th in the latest Top500 list), hosted and managed at CINECA's new data centre in the Technopole of Bologna, will be fully operational by the summer of this year.
Leonardo will have a strategic role in fostering national and international initiatives with a clear focus on Earth science, actively supporting relevant projects and activities such as the second phase (2023-2026) of the EuroHPC Center of Excellence for Exascale in Solid Earth (ChEESE-2P) [1], Geo-INQUIRE [2], DT-GEO [3] and Destination Earth, among others.
In this talk, the Leonardo system will be introduced, together with how CINECA will actively support the scientific ecosystem, with emphasis on support for the Earth science community.
[1] Funded by HORIZON-EUROHPC-JU-2021-COE-01 under Grant Agreement No 101093038.
[2] 2022-2024, Grant Agreement No 101058518.
[3] 2022-2025, Grant Agreement No 101058129.
How to cite: Lanucara, P. and Amati, G.: Leonardo: A Simulator4Earth, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-55, https://doi.org/10.5194/egusphere-gc11-solidearth-55, 2023.
GC11-solidearth-59 | Orals | Session 4
Accelerating Time-To-Science in Geophysical SimulationsIgnacio Sarasua and Filippo Spiga
Accelerated computing is nowadays accepted, de facto, as the path forward to deploying large-scale, energy-efficient scientific and technical computing (including exascale). A positive side effect has been a tremendous opportunity for domain scientists to accelerate the pace of discovery and innovation, as well as the capability to respond and adapt quickly to unforeseen natural scenarios by rapidly deploying computational tools in support of coordinated mitigation strategies and on-the-ground responses (so-called 'urgent computing', work pioneered by the ChEESE EU Centre of Excellence). The purpose of this talk is to briefly introduce the NVIDIA platform, hardware and software, showcase a few examples of geophysical applications that have been successfully accelerated using NVIDIA GPUs, and set the stage for a future of computing in which classic HPC simulations are coupled with or augmented by AI methods.
How to cite: Sarasua, I. and Spiga, F.: Accelerating Time-To-Science in Geophysical Simulations, Galileo Conference: Solid Earth and Geohazards in the Exascale Era, Barcelona, Spain, 23–26 May 2023, GC11-solidearth-59, https://doi.org/10.5194/egusphere-gc11-solidearth-59, 2023.