ITS2 – Responsible and intelligent data modelling, AI and machine learning strategies for climate, environment, planetary and space sciences

EGU22-486 | Presentations | ITS2.1/PS1.2

Enhancing planetary imagery with the holistic attention network algorithm 

Denis Maxheimer, Ioannis Markonis, Masner Jan, Curin Vojtech, Pavlik Jan, and Solomonidou Anezina

Recent developments in computer vision research in the field of Single Image Super Resolution (SISR) can help improve the quality of satellite imagery data and, thus, find application in planetary exploration. The aim of this study is to enhance planetary surface imagery for planetary bodies for which data are available only at low resolution. Here, we have applied the holistic attention network (HAN) algorithm to a set of images of Saturn’s moon Titan from the Titan Radar Mapper instrument in its Synthetic Aperture Radar (SAR) mode, which was on board the Cassini spacecraft. HAN can find correlations among hierarchical layers, channels of each layer, and all positions of each channel, which can be interpreted as an application and intersection of previously known models. The algorithm used in our case study was trained on 5000 grayscale images from the HydroSHEDS Earth surface imagery dataset resampled over different resolutions. Our experimental setup was to generate High Resolution (HR) imagery from eight times lower resolution (x8 scale). We followed the standard workflow for this purpose, which is to first train the network enhancing the x2 scale to HR, then the x4 scale to x2, and finally the x8 scale to x4, each stage using the results of the previous training. The promising results open a path for further applications of the trained model to improve imagery data quality and aid in the detection and analysis of planetary surface features.
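
A minimal sketch of the cascaded x2 -> x4 -> x8 workflow described above (a simple pixel-shuffle block stands in for the full HAN architecture, and all shapes are illustrative assumptions rather than the study's actual settings):

```python
# Hedged sketch, not the authors' code: progressive x8 enhancement obtained by
# chaining three x2 super-resolution stages, each trained on the outputs of the
# previous one (training loops omitted here).
import torch
import torch.nn as nn

class SRx2(nn.Module):
    """Minimal x2 super-resolution block (stand-in for the HAN model)."""
    def __init__(self, channels=1, features=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),   # rearranges channels into a 2x larger image
        )
    def forward(self, x):
        return self.body(x)

# Three stages trained in sequence (x8 -> x4, x4 -> x2, x2 -> HR).
stages = [SRx2() for _ in range(3)]

def enhance_x8(lr_image):
    """Apply the three x2 stages in a cascade to obtain an x8 enhancement."""
    out = lr_image
    for stage in stages:
        out = stage(out)
    return out

lr = torch.rand(1, 1, 32, 32)   # toy grayscale low-resolution tile
hr = enhance_x8(lr)             # -> shape (1, 1, 256, 256)
print(hr.shape)
```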

How to cite: Maxheimer, D., Markonis, I., Jan, M., Vojtech, C., Jan, P., and Anezina, S.: Enhancing planetary imagery with the holistic attention network algorithm, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-486, https://doi.org/10.5194/egusphere-egu22-486, 2022.

EGU22-692 | Presentations | ITS2.1/PS1.2

Autonomous lineament detection in Galileo images of Europa 

Caroline Haslebacher and Nicolas Thomas

Lineaments are prominent features on the surface of Jupiter's moon Europa. Analysing these linear features thoroughly leads to insights into their formation mechanisms and the interactions between the subsurface ocean and the surface. The orientation and position of lineaments are also important for determining the stress field on Europa. The Europa Clipper mission is planned to launch in 2024 and will fly by Europa more than 40 times. In the light of this, an autonomous lineament detection and segmentation tool would prove useful for processing the vast number of expected images efficiently and would help to identify processes affecting the ice sheet. 

We have trained a convolutional neural network to detect, classify and segment lineaments in images of Europa returned by the Galileo mission. The Galileo images that make up the training set are segmented manually, following a dedicated guideline. For better performance, we make use of synthetically generated data to pre-train the network. The current status of the work will be described.
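
The pre-train-on-synthetic, fine-tune-on-Galileo workflow can be sketched as follows (a toy fully convolutional network and random stand-in data; the actual network and data pipeline are not specified in the abstract):

```python
# Hedged sketch, assumptions throughout: pre-train a small segmentation network
# on synthetically generated lineament images, then fine-tune it on the
# manually segmented Galileo images.
import torch
import torch.nn as nn

def make_segmenter(in_ch=1, n_classes=2):
    """Tiny fully convolutional network producing per-pixel class logits."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, n_classes, 1),
    )

def train(model, images, masks, epochs=5, lr=1e-3):
    """Generic training loop shared by pre-training and fine-tuning."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), masks)
        loss.backward()
        opt.step()
    return model

# Toy stand-ins: a synthetic set for pre-training, a smaller labelled set for fine-tuning.
synthetic_imgs = torch.rand(8, 1, 64, 64)
synthetic_masks = torch.randint(0, 2, (8, 64, 64))
galileo_imgs = torch.rand(4, 1, 64, 64)
galileo_masks = torch.randint(0, 2, (4, 64, 64))

model = make_segmenter()
model = train(model, synthetic_imgs, synthetic_masks)         # pre-training
model = train(model, galileo_imgs, galileo_masks, lr=1e-4)    # fine-tuning
```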

How to cite: Haslebacher, C. and Thomas, N.: Autonomous lineament detection in Galileo images of Europa, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-692, https://doi.org/10.5194/egusphere-egu22-692, 2022.

EGU22-1014 | Presentations | ITS2.1/PS1.2

Automatic detection of the electron density from the WHISPER instrument onboard CLUSTER II 

Emmanuel De Leon, Nicolas Gilet, Xavier Vallières, Luca Bucciantini, Pierre Henri, and Jean-Louis Rauch

The Waves of HIgh frequency and Sounder for Probing Electron density by Relaxation (WHISPER) instrument is part of the Wave Experiment Consortium (WEC) of the CLUSTER II mission. The instrument consists of a receiver, a transmitter, and a wave spectrum analyzer. It delivers active (when in sounding mode) and natural electric field spectra. The characteristic signature of waves indicates the nature of the ambient plasma regime and, combined with the spacecraft position, reveals the different magnetosphere boundaries and regions. The thermal electron density can be deduced from the characteristics of natural waves in natural mode and from the resonances triggered in sounding mode, giving access to a key parameter of scientific interest and a major driver for the calibration of the particle instruments.

Until recently, the electron density derivation required a manual time/frequency domain initialization of the search algorithms, based upon visual inspection of WHISPER active and natural spectrograms and other datasets from different instruments onboard CLUSTER. To automate this process, knowledge of the region (plasma regime) is highly desirable. A Multi-Layer Perceptron model has been implemented for this purpose. For each detected region, a GRU recurrent network model combined with an ad-hoc algorithm is then used to determine the electron density from WHISPER active spectra. These models have been trained using electron densities previously derived from various semi-automatic algorithms and manually validated, resulting in an accuracy of up to 98% in some plasma regions. A production pipeline based on these models has been implemented to routinely derive the electron density, reducing human intervention by up to a factor of ten. Work is currently ongoing to develop models to process natural measurements, where the data volume is much higher and the validation process more complex. These models for automated electron density determination will also be useful for other future space missions.
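
A minimal sketch of the two-stage region-then-density idea (spectrum length, number of regions and network sizes are assumptions, not the operational pipeline):

```python
# Hedged sketch: an MLP classifies the plasma region from a WHISPER active
# spectrum, and a GRU regressor estimates the electron density from a sequence
# of spectra for the detected region (training omitted).
import torch
import torch.nn as nn

N_BINS = 512      # assumed number of frequency bins per spectrum
N_REGIONS = 4     # assumed number of plasma regions

region_mlp = nn.Sequential(
    nn.Linear(N_BINS, 128), nn.ReLU(),
    nn.Linear(128, N_REGIONS),            # logits over plasma regions
)

class DensityGRU(nn.Module):
    """GRU regressor mapping a sequence of spectra to an electron density series."""
    def __init__(self, n_bins=N_BINS, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_bins, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, spectra):               # spectra: (batch, time, n_bins)
        out, _ = self.gru(spectra)
        return self.head(out).squeeze(-1)     # density estimate per time step

spectra = torch.rand(2, 10, N_BINS)           # toy batch: 2 sequences of 10 spectra
region_logits = region_mlp(spectra[:, -1, :])
density = DensityGRU()(spectra)
print(region_logits.shape, density.shape)     # (2, 4), (2, 10)
```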

How to cite: De Leon, E., Gilet, N., Vallières, X., Bucciantini, L., Henri, P., and Rauch, J.-L.: Automatic detection of the electron density from the WHISPER instrument onboard CLUSTER II, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1014, https://doi.org/10.5194/egusphere-egu22-1014, 2022.

EGU22-2765 | Presentations | ITS2.1/PS1.2

Extrapolation of CRISM based spectral feature maps using CaSSIS four-band images with machine learning techniques 

Michael Fernandes, Nicolas Thomas, Benedikt Elser, Angelo Pio Rossi, Alexander Pletl, and Gabriele Cremonese

Spectroscopy provides important information on the surface composition of Mars. Spectral data can support studies such as the evaluation of potential (manned) landing sites as well as the determination of past surface processes. The CRISM instrument on NASA’s Mars Reconnaissance Orbiter is a high spectral resolution visible–infrared mapping spectrometer currently in orbit around Mars. It records 2D spatially resolved spectra over a wavelength range of 362 nm to 3920 nm. At present, the collected data cover less than 2% of the planet, and lifetime issues with the cryo-coolers limit further data acquisition in the infrared band. In order to extend the areal coverage for spectroscopic analysis in regions of major importance to the history of liquid water on Mars (e.g. Valles Marineris, Noachis Terra), we investigate whether data from other instruments can be used to extrapolate CRISM spectral features to areas without spectral imaging. The present work uses data from the CaSSIS instrument, a high spatial resolution colour and stereo imager onboard the European Space Agency’s ExoMars Trace Gas Orbiter (TGO). CaSSIS returns images at 4.5 m/px from the nominal 400 km altitude orbit in four colours. Its filters were selected to provide mineral diagnostics in the visible wavelength range (400 – 1100 nm). It has so far imaged around 2% of the planet with an estimated overlap of ≲0.01% with CRISM data. This study introduces a two-step pixel-based reconstruction approach using CaSSIS four-band images. In the first step, advanced unsupervised techniques are applied to CRISM hyperspectral datacubes to reduce dimensionality and establish clusters of spectral features. Given that these clusters contain reasonable information about the surface composition, in a second step it is feasible to map CaSSIS four-band images to the spectral clusters by training a machine learning classifier (for the cluster labels) using only CaSSIS datasets. In this way the system can extrapolate spectral features to areas unmapped by CRISM. To assess the performance of the proposed methodology, we analyzed actual and artificially generated CaSSIS images and benchmarked the results against traditional correlation-based methods. Qualitative and quantitative analyses indicate that this procedure can predict spectral features in areas without spectral imaging to a quantitatively assessable extent, especially in highly feature-rich landscapes.
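
One possible realization of the two-step pipeline, with PCA/k-means and a random forest standing in for the unspecified unsupervised and supervised components (toy random data, assumed dimensions):

```python
# Hedged sketch of the two-step approach described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Step 1: reduce CRISM hyperspectral pixels and cluster them into spectral units.
crism_pixels = rng.random((5000, 240))          # 5000 pixels x 240 spectral bands (assumed)
latent = PCA(n_components=10).fit_transform(crism_pixels)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(latent)

# Step 2: learn to predict those cluster labels from co-located CaSSIS four-band
# reflectances, so the mapping can be extrapolated beyond CRISM coverage.
cassis_pixels = rng.random((5000, 4))
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(cassis_pixels, labels)

# Apply to CaSSIS pixels without CRISM coverage.
new_cassis = rng.random((1000, 4))
predicted_units = clf.predict(new_cassis)
print(predicted_units[:10])
```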

How to cite: Fernandes, M., Thomas, N., Elser, B., Rossi, A. P., Pletl, A., and Cremonese, G.: Extrapolation of CRISM based spectral feature maps using CaSSIS four-band images with machine learning techniques, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2765, https://doi.org/10.5194/egusphere-egu22-2765, 2022.

EGU22-2994 | Presentations | ITS2.1/PS1.2

Interpretable Solar Flare Prediction with Deep Learning 

Robert Jarolim, Astrid Veronig, Tatiana Podladchikova, Julia Thalmann, Dominik Narnhofer, Markus Hofinger, and Thomas Pock

Solar flares and coronal mass ejections (CMEs) are the main drivers for severe space weather disturbances on Earth and other planets. While the geo-effects of CMEs give us a lead time of about 1 to 4 days, the effects of flares and flare-accelerated solar energetic particles (SEPs) are very immediate, 8 minutes for the enhanced radiation and as short as about 20 minutes for the highest energy SEPs arriving at Earth. Thus, predictions of solar flare occurrence at least several hours ahead are of high importance for the mitigation of severe space weather effects.

Observations and simulations of solar flares suggest that the structure and evolution of the active region’s magnetic field is a key component for energetic eruptions. The recent advances in deep learning provide tools to directly learn complex relations from multi-dimensional data. Here, we present a novel deep learning method for short-term solar flare prediction. The algorithm is based on the HMI photospheric line-of-sight magnetic field and its temporal evolution together with the coronal evolution as observed by multi-wavelength EUV filtergrams from the AIA instrument onboard the Solar Dynamics Observatory. We train a neural network to independently identify features in the imaging data based on the dynamic evolution of the coronal structure and the photospheric magnetic field evolution, which may hint at flare occurrence in the near future.

We show that our method can reliably predict flares six hours ahead, with 73% correct flaring predictions (89% when considering only M- and X-class flares), and 83% correct quiet active region predictions.

In order to overcome the “black box problem” of machine-learning algorithms, and thus to allow for physical interpretation of the network findings, we employ a spatio-temporal attention mechanism. This allows us to extract the emphasized regions, which reveal the neural network interpretation of the flare onset conditions. Our comparison shows that predicted precursors are associated with the position of flare occurrence, respond to dynamic changes, and align with characteristics within the active region.
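
As an illustration of the interpretability idea, a minimal spatial-attention readout on top of CNN feature maps can expose which image regions drive the flare/no-flare decision (this is a generic sketch with assumed channel counts, not the authors' architecture):

```python
# Hedged sketch: attention-weighted pooling over CNN features; the attention
# map highlights the regions the classifier relies on.
import torch
import torch.nn as nn

class AttentionFlareClassifier(nn.Module):
    def __init__(self, in_channels=10, features=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
        )
        self.attention = nn.Conv2d(features, 1, kernel_size=1)   # one weight per pixel
        self.classifier = nn.Linear(features, 1)                 # flare / no flare

    def forward(self, x):                    # x: (B, channels, H, W), e.g. an HMI + AIA stack
        feats = self.backbone(x)
        weights = torch.softmax(self.attention(feats).flatten(2), dim=-1)  # (B, 1, H*W)
        pooled = (feats.flatten(2) * weights).sum(dim=-1)                  # (B, features)
        prob = torch.sigmoid(self.classifier(pooled))
        attn_map = weights.view(x.size(0), 1, *feats.shape[-2:])
        return prob, attn_map

model = AttentionFlareClassifier()
prob, attn = model(torch.rand(2, 10, 64, 64))    # toy batch of multi-channel cutouts
print(prob.shape, attn.shape)                    # (2, 1), (2, 1, 64, 64)
```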

How to cite: Jarolim, R., Veronig, A., Podladchikova, T., Thalmann, J., Narnhofer, D., Hofinger, M., and Pock, T.: Interpretable Solar Flare Prediction with Deep Learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2994, https://doi.org/10.5194/egusphere-egu22-2994, 2022.

EGU22-5721 | Presentations | ITS2.1/PS1.2

Magnetopause and bow shock models with machine learning 

Ambre Ghisalberti, Nicolas Aunai, and Bayane Michotte de Welle

The magnetopause (MP) and the bow shock (BS) are the two boundaries that delimit the magnetosheath, the region between the magnetosphere and the solar wind. Their position and shape depend on the upstream solar wind and interplanetary magnetic field conditions.

Predicting their shape and position is the starting point of many subsequent studies of the processes controlling the coupling between the Earth’s magnetosphere and its interplanetary environment. We now have at our disposal a large amount of data from a multitude of spacecraft missions, allowing for good spatial coverage, as well as algorithms based on statistical learning to automatically detect the two boundaries. From the data of 9 satellites over 20 years, we identified around 19,000 crossings of the BS and 36,000 crossings of the MP. They were used, together with their associated upstream conditions, to train a regression model to predict the shape and position of the boundaries. 

Preliminary results indicate that the obtained models outperform analytical models without making simplifying assumptions on the geometry and the dependency over control parameters.
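
As an illustration of the regression setup, the sketch below maps assumed upstream control parameters to a boundary standoff distance with a gradient-boosted regressor (synthetic data; the study's actual feature set and regressor are not specified here):

```python
# Hedged sketch: regressing a boundary distance from upstream solar wind / IMF
# conditions and the angular position of the crossing.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000

X = np.column_stack([
    rng.lognormal(mean=0.5, sigma=0.4, size=n),   # dynamic pressure [nPa]
    rng.normal(0.0, 3.0, size=n),                 # IMF Bz [nT]
    rng.normal(450.0, 80.0, size=n),              # solar wind speed [km/s]
    rng.uniform(0.0, np.pi / 2, size=n),          # zenith angle of the crossing [rad]
])
# Synthetic "true" boundary distance in Earth radii, for demonstration only.
r = 11.0 * X[:, 0] ** (-1 / 6.6) / np.cos(0.8 * X[:, 3]) + rng.normal(0, 0.3, n)

X_train, X_test, r_train, r_test = train_test_split(X, r, random_state=0)
model = GradientBoostingRegressor().fit(X_train, r_train)
print("R^2 on held-out crossings:", round(model.score(X_test, r_test), 3))
```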

How to cite: Ghisalberti, A., Aunai, N., and Michotte de Welle, B.: Magnetopause and bow shock models with machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5721, https://doi.org/10.5194/egusphere-egu22-5721, 2022.

EGU22-5739 | Presentations | ITS2.1/PS1.2

Deep learning for surrogate modeling of two-dimensional mantle convection 

Siddhant Agarwal, Nicola Tosi, Pan Kessel, Doris Breuer, and Grégoire Montavon

Mantle convection plays a fundamental role in the long-term thermal evolution of terrestrial planets like Earth, Mars, Mercury and Venus. The buoyancy-driven creeping flow of silicate rocks in the mantle is modeled as a highly viscous fluid over geological time scales and quantified using partial differential equations (PDEs) for conservation of mass, momentum and energy. Yet, key parameters and initial conditions to these PDEs are poorly constrained and often require a large sampling of the parameter space to find constraints from observational data. Since it is not computationally feasible to solve hundreds of thousands of forward models in 2D or 3D, some alternatives have been proposed. 

The traditional alternative to high-fidelity simulations has been to use 1D models based on scaling laws. While computationally efficient, these are limited in the amount of physics they can model (e.g., depth-dependent material properties) and predict only mean quantities such as the mean mantle temperature. Hence, there has been a growing interest in machine learning techniques to develop more advanced surrogate models. For example, Agarwal et al. (2020) used feedforward neural networks (FNNs) to reliably predict the evolution in time of the entire 1D laterally averaged temperature profile from five parameters: reference viscosity, enrichment factor for the crust in heat-producing elements, initial mantle temperature, activation energy and activation volume of the diffusion creep. 

We extend that study to predict the full 2D temperature field, which contains more information in the form of convection structures such as hot plumes and cold downwellings. This is achieved by training deep learning algorithms on a data set of 10,525 2D simulations of the thermal evolution of the mantle of a Mars-like planet. First, we use convolutional autoencoders to compress the size of each temperature field by a factor of 142. Second, we compare the use of two algorithms for predicting the compressed (latent) temperature fields: FNNs and long short-term memory networks (LSTMs). On the one hand, the FNN predictions are slightly more accurate with respect to unseen simulations (99.30% vs. 99.22% for the LSTM). On the other hand, proper orthogonal decomposition (POD) of the LSTM and FNN predictions shows that despite a lower mean relative accuracy, LSTMs capture the flow dynamics better than FNNs. The POD coefficients from FNN predictions sum up to 96.51% relative to the coefficients of the original simulations, while for LSTMs this metric increases to 97.66%. 
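
The compress-then-predict idea can be sketched as follows (toy field sizes and latent dimensions; the study's actual autoencoder achieves a compression factor of 142 and is trained on the full simulation data set):

```python
# Hedged sketch: a convolutional autoencoder compresses 2D temperature fields,
# and an LSTM predicts the sequence of latent vectors from the five simulation
# parameters (training omitted).
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(8, 4, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2),
        )
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class LatentLSTM(nn.Module):
    """Maps the five simulation parameters (repeated along time) to latent fields."""
    def __init__(self, n_params=5, latent_dim=4 * 16 * 16, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(n_params, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent_dim)
    def forward(self, params_seq):                 # (batch, time, n_params)
        out, _ = self.lstm(params_seq)
        return self.head(out)                      # (batch, time, latent_dim)

ae = ConvAutoencoder()
recon, z = ae(torch.rand(2, 1, 64, 64))            # toy temperature snapshots
pred_latent = LatentLSTM()(torch.rand(2, 20, 5))   # 20 predicted time steps
print(recon.shape, z.shape, pred_latent.shape)
```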

We conclude the talk by stating some strengths and weaknesses of this approach, as well as highlighting some ongoing research in the broader field of fluid dynamics that could help increase the accuracy and efficiency of such parameterized surrogate models.

How to cite: Agarwal, S., Tosi, N., Kessel, P., Breuer, D., and Montavon, G.: Deep learning for surrogate modeling of two-dimensional mantle convection, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5739, https://doi.org/10.5194/egusphere-egu22-5739, 2022.

EGU22-6371 | Presentations | ITS2.1/PS1.2

STIX solar flare image reconstruction and classification using machine learning 

Hualin Xiao, Säm Krucker, Daniel Ryan, Andrea Battaglia, Erica Lastufka, Etesi László, Ewan Dickson, and Wen Wang

The Spectrometer Telescope for Imaging X-rays (STIX) is an instrument onboard Solar Orbiter. It measures X-rays emitted during solar flares in the energy range from 4 to 150 keV and takes X-ray images using an indirect imaging technique based on the Moiré effect. The STIX instrument consists of 32 pairs of tungsten grids and 32 pixelated CdTe detector units. Flare images can be reconstructed on the ground using algorithms such as back-projection, forward-fit, and maximum-entropy after the full pixel data are downloaded. Here we report a new image reconstruction and classification model based on machine learning. Results will be discussed and compared with those from the traditional algorithms.

How to cite: Xiao, H., Krucker, S., Ryan, D., Battaglia, A., Lastufka, E., László, E., Dickson, E., and Wang, W.: STIX solar flare image reconstruction and classification using machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6371, https://doi.org/10.5194/egusphere-egu22-6371, 2022.

EGU22-8940 | Presentations | ITS2.1/PS1.2

Mars events polyphonic detection, segmentation and classification with a hybrid recurrent scattering neural network using InSight mission data 

Salma Barkaoui, Angel Bueno Rodriguez, Philippe Lognonné, Maarten De Hoop, Grégory Sainton, Mathieu Plasman, and Taichi kawamura

Since their deployment on the Martian surface, the seismometer SEIS (Seismic Experiment for Interior Structure) and the APSS (Auxiliary Payload Sensors Suite) of the InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) mission have been recording the daily Martian ground acceleration and pressure, respectively. These data are essential to investigate the geophysical and atmospheric features of the red planet. So far, the InSight team has been able to detect multiple Martian events. We distinguish two types: artificial events, such as the lander modes or the micro-tilts known as glitches, and natural events, such as the pressure drops, which are important to characterize the Martian subsurface, and the seismic events used to study the interior structure of Mars. Despite the data complexity, the InSight team was able to catalog these events (Clinton et al., 2020 for the seismic event catalog; Banfield et al., 2018, 2020 for the pressure drop catalog; and Scholz et al., 2020 for the glitch catalog). However, despite all this effort, we still face multiple challenges. In fact, seismic event detection is limited by the SEIS sensitivity, which is at the origin of a high noise level that may contaminate the seismic events; thus, we can miss some of them, especially in noisy periods. Besides, their detection is very challenging and requires multiple preprocessing tasks, which is time-consuming. For the pressure drops, the detection method used in Banfield et al. (2020) is limited to drops above a threshold of 0.3 Pa, so smaller pressure drops are not included. In addition, due to a lack of energy, the pressure sensor was off for several days, and many pressure drops were missed. Being able to detect them directly in the SEIS data, which are, in contrast, provided continuously, is therefore very important.

In this regard, the aim of this study is to overcome these challenges, improve the detection of Martian events, and provide an updated catalog automatically. To this end, we draw on one of the main techniques used today for fully automatic data processing and analysis: machine learning, and in particular deep learning. The architecture used is the “Hybrid Recurrent Scattering Neural Network” (Bueno et al., 2021), adapted for Mars.

How to cite: Barkaoui, S., Bueno Rodriguez, A., Lognonné, P., De Hoop, M., Sainton, G., Plasman, M., and kawamura, T.: Mars events polyphonic detection, segmentation and classification with a hybrid recurrent scattering neural network using InSight mission data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8940, https://doi.org/10.5194/egusphere-egu22-8940, 2022.

EGU22-9077 | Presentations | ITS2.1/PS1.2

Automatic Detection of Interplanetary Coronal Mass Ejections 

Hannah Ruedisser, Andreas Windisch, Ute V. Amerstorfer, Tanja Amerstorfer, Christian Möstl, Martin A. Reiss, and Rachel L. Bailey

Interplanetary coronal mass ejections (ICMEs) are one of the main drivers of space weather disturbances. In the past, different machine learning approaches have been used to automatically detect events in existing time series resulting from solar wind in situ data. However, classification, early detection and ultimately forecasting still remain challenges when facing the large amount of data from different instruments. We propose a pipeline using a network similar to ResUNet++ (Jha et al., 2019) for the automatic detection of ICMEs. Comparing it to an existing method, we find that while achieving similar results, our model outperforms the baseline regarding GPU usage, training time and robustness to missing features, thus making it more usable for other datasets.

The method has been tested on in situ data from WIND. Additionally, it produced reasonable results on STEREO A and STEREO B datasets with fewer input parameters. The relatively fast training allows straightforward tuning of hyperparameters and could therefore easily be used to detect other structures and phenomena in solar wind data, such as corotating interaction regions.

How to cite: Ruedisser, H., Windisch, A., Amerstorfer, U. V., Amerstorfer, T., Möstl, C., Reiss, M. A., and Bailey, R. L.: Automatic Detection of Interplanetary Coronal Mass Ejections, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9077, https://doi.org/10.5194/egusphere-egu22-9077, 2022.

EGU22-9621 | Presentations | ITS2.1/PS1.2

Machine Learning Techniques for Automated ULF Wave Recognition in Swarm Time Series 

Georgios Balasis, Alexandra Antonopoulou, Constantinos Papadimitriou, Adamantia Zoe Boutsi, Omiros Giannakis, and Ioannis A. Daglis

Machine learning (ML) techniques have been successfully introduced in the fields of Space Physics and Space Weather, yielding highly promising results in modeling and predicting many disparate aspects of the geospace. Magnetospheric ultra-low frequency (ULF) waves play a key role in the dynamics of the near-Earth electromagnetic environment and, therefore, their importance in Space Weather studies is indisputable. Magnetic field measurements from recent multi-satellite missions are currently advancing our knowledge on the physics of ULF waves. In particular, Swarm satellites have contributed to the expansion of data availability in the topside ionosphere, stimulating much recent progress in this area. Coupled with the new successful developments in artificial intelligence, we are now able to use more robust approaches for automated ULF wave identification and classification. Here, we present results employing various neural network (NN) methods (e.g. Fuzzy Artificial Neural Networks, Convolutional Neural Networks) in order to detect ULF waves in the time series of low-Earth orbit (LEO) satellites. The outputs of the methods are compared against other ML classifiers (e.g. k-Nearest Neighbors (kNN), Support Vector Machines (SVM)), showing a clear dominance of the NNs in successfully classifying wave events.
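
The NN-versus-baseline comparison can be sketched with scikit-learn (random stand-in features and a generic MLP in place of the fuzzy and convolutional networks used in the study):

```python
# Hedged sketch: comparing an NN-based classifier against kNN and SVM baselines
# for ULF wave event detection.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(2)
X = rng.random((2000, 60))                        # e.g. flattened spectrogram windows
y = (X[:, :10].mean(axis=1) > 0.5).astype(int)    # toy "wave present" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
classifiers = {
    "NN (MLP)": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(name, "F1 =", round(f1_score(y_te, clf.predict(X_te)), 3))
```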

How to cite: Balasis, G., Antonopoulou, A., Papadimitriou, C., Boutsi, A. Z., Giannakis, O., and Daglis, I. A.: Machine Learning Techniques for Automated ULF Wave Recognition in Swarm Time Series, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9621, https://doi.org/10.5194/egusphere-egu22-9621, 2022.

EGU22-10105 | Presentations | ITS2.1/PS1.2

Forecasting solar wind conditions at Mars using transfer learning 

The solar wind and its variability are well understood at Earth. However, at distances larger than 1 AU the picture is less clear, mostly due to the lack of in-situ measurements. In this study we use transfer learning principles to infer solar wind conditions at Mars in periods where no measurements are available, with the aim of better illuminating the interaction between the partially magnetised Martian plasma environment and the upstream solar wind. Initially, a convolutional neural network (CNN) model for forecasting measurements of the interplanetary magnetic field, solar wind velocity, density and dynamic pressure is trained on terrestrial solar wind data. Afterwards, knowledge from this model is incorporated into a secondary CNN model which is used for predicting solar wind conditions upstream of Mars up to 5 hours in the future. We present the results of this study as well as the opportunities to expand this method for use at other planets.
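
A minimal sketch of the transfer-learning step (assumed channel counts and window lengths; the Earth-side training loop is omitted): convolutional features learned on terrestrial data are frozen and reused by the Mars-side forecasting head.

```python
# Hedged sketch, not the study's model: a 1D CNN feature extractor trained on
# terrestrial solar wind time series is frozen and reused for forecasting
# upstream conditions at Mars.
import torch
import torch.nn as nn

def make_feature_extractor(n_channels=4):
    return nn.Sequential(
        nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
        nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),     # -> (batch, 32)
    )

# Source model: trained on Earth data (training loop omitted).
earth_features = make_feature_extractor()
earth_head = nn.Linear(32, 4)      # e.g. IMF magnitude, speed, density, dynamic pressure

# Target model: freeze the learned features and fit a new head on the sparser
# Mars dataset to forecast conditions up to 5 hours ahead.
for p in earth_features.parameters():
    p.requires_grad = False
mars_head = nn.Linear(32, 4)

mars_history = torch.rand(8, 4, 128)               # toy batch: 8 windows, 4 channels, 128 steps
forecast = mars_head(earth_features(mars_history))
print(forecast.shape)                               # (8, 4)
```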

How to cite: Durward, S.: Forecasting solar wind conditions at Mars using transfer learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10105, https://doi.org/10.5194/egusphere-egu22-10105, 2022.

EGU22-11501 | Presentations | ITS2.1/PS1.2

Automatic detection of solar magnetic tornadoes based on computer vision methods. 

Dmitrii Vorobev, Mark Blumenau, Mikhail Fridman, Olga Khabarova, and Vladimir Obridko

We propose a new method for the automatic detection of solar magnetic tornadoes based on computer vision methods. Magnetic tornadoes are magneto-plasma structures with a swirling magnetic field in the solar corona, and there is also evidence for the rotation of plasma in them. A theoretical description and numerical modeling of these objects are very difficult due to the three-dimensionality of the structures and the peculiarities of their spatial and temporal dynamics [Wedemeyer-Böhm et al., 2012, Nature]. Typical sizes of magnetic tornadoes vary from 10² km up to 10⁶ km, and their lifetime is from several minutes to many hours. So far, only a few works have been devoted to their study, and there are no accepted algorithms for detecting solar magnetic tornadoes by machine methods. An insufficient number of identified structures is one of many problems that do not allow studying the physics of magnetic tornadoes and the processes associated with them. In particular, the filamentous rotating structures are well detectable only at the limb, while one can only make suppositions about their presence on the solar disk.
Our method is based on analyzing SDO/AIA images at wavelengths of 171 Å, 193 Å, 211 Å and 304 Å, to which several different algorithms are applied, namely convolution with filters, a convolutional neural network, and gradient boosting. The new technique is a combination of several approaches (transfer learning & stacking) that are widely used in various fields of data analysis. Such an approach allows detecting the structures in a short time with sufficient accuracy. As test objects, we used magnetic tornadoes previously described in the literature [e.g., Wedemeyer et al., 2013, ApJ; Mghebrishvili et al., 2015, ApJ]. Our method made it possible to detect those structures, as well as to reveal previously unknown magnetic tornadoes.

How to cite: Vorobev, D., Blumenau, M., Fridman, M., Khabarova, O., and Obridko, V.: Automatic detection of solar magnetic tornadoes based on computer vision methods., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11501, https://doi.org/10.5194/egusphere-egu22-11501, 2022.

EGU22-12480 | Presentations | ITS2.1/PS1.2

A versatile exploration method for simulated data based on Self Organizing Maps 

Maria Elena Innocenti, Sophia Köhne, Simon Hornisch, Rainer Grauer, Jorge Amaya, Jimmy Raeder, Banafsheh Ferdousi, James "Andy" Edmond, and Giovanni Lapenta

The large amount of data produced by measurements and simulations of space plasmas has made this field fertile ground for the application of classification methods that can support the scientist in preliminary data analysis. Among the different classification methods available, Self-Organizing Maps (SOMs) [Kohonen, 1982] offer the distinct advantage of producing an ordered, lower-dimensional representation of the input data that preserves their topographical relations. The 2D map obtained after training can then be explored to gather knowledge on the data it represents. The distance between nodes reflects the distance between the input data: one can then further cluster the map nodes to identify large-scale regions in the data where plasma properties are expected to be similar.

In this work, we train SOMs using data from different simulations of different aspects of the heliospheric environment: a global magnetospheric simulation done with the OpenGGCM-CTIM-RCM code, a Particle In Cell simulation of plasmoid instability done with the semi-implicit code ECSIM, a fully kinetic simulation of single X point reconnection done with the Vlasov code implemented in MuPhy2.

We examine the SOM feature maps, unified distance matrix and SOM node weights to unlock information on the input data. We then classify the nodes of the different SOMs into a smaller, automatically selected number of clusters and obtain, in all three cases, clusters that map well to our a priori knowledge of the three systems. Results for the magnetospheric simulations are described in Innocenti et al. (2021). 

This classification strategy then emerges as a useful, relatively cheap and versatile technique for the analysis of simulation, and possibly observational, plasma physics data.
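
A compact sketch of the SOM-then-cluster workflow (toy features, e.g. with the MiniSom package; the simulations above use their own normalized plasma and field quantities as inputs):

```python
# Hedged sketch: train a SOM, cluster its node weights into a few large-scale
# classes, and assign each simulation cell to the class of its best-matching node.
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
data = rng.random((10000, 6))                 # toy stand-in for per-cell plasma/field features

som = MiniSom(20, 20, data.shape[1], sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(data, num_iteration=10000)   # ordered 2D map of the input space

# Cluster the SOM node weights into a small number of large-scale regions.
weights = som.get_weights().reshape(-1, data.shape[1])     # (20*20, n_features)
node_cluster = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(weights)

def classify(sample):
    """Return the cluster of the best-matching SOM node for one data point."""
    i, j = som.winner(sample)
    return node_cluster[i * 20 + j]

print(classify(data[0]))
```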

Innocenti, M. E., Amaya, J., Raeder, J., Dupuis, R., Ferdousi, B., & Lapenta, G. (2021). Unsupervised classification of simulated magnetospheric regions. Annales Geophysicae Discussions, 1-28. 

https://angeo.copernicus.org/articles/39/861/2021/angeo-39-861-2021.pdf

How to cite: Innocenti, M. E., Köhne, S., Hornisch, S., Grauer, R., Amaya, J., Raeder, J., Ferdousi, B., Edmond, J. "., and Lapenta, G.: A versatile exploration method for simulated data based on Self Organizing Maps, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12480, https://doi.org/10.5194/egusphere-egu22-12480, 2022.

EGU22-12830 | Presentations | ITS2.1/PS1.2

Re-implementing and Extending the NURD Algorithm to the Full Duration of the Van Allen Probes Mission 

Matyas Szabo-Roberts, Karolina Kume, Artem Smirnov, Irina Zhelavskaya, and Yuri Shprits

Generating reliable databases of electron density measurements over a wide range of geomagnetic conditions is essential for improving empirical models of electron density. The Neural-network-based Upper hybrid Resonance Determination (NURD) algorithm has been developed for automated extraction of electron density from Van Allen Probes electric field measurements, and has been shown to be in good agreement with existing semi-automated methods and empirical models. The extracted electron density data has since then been used to develop the PINE (Plasma density in the Inner magnetosphere Neural network-based Empirical) model, an empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices.
In this study we re-implement the NURD algorithm in both Python and Matlab, and compare the performance of these implementations to each other and previous NURD results. We take advantage of a labeled training data set now being available for the full duration of the Van Allen Probes mission to train the network and generate an electron density data set for a significantly longer time period. We perform detailed comparisons between this output, electron density produced from Van Allen Probes electric field measurements using the AURA semi-automated algorithm, and electron density obtained from existing empirical models. We also present preliminary results from the PINE plasmasphere model trained on this extended NURD electron density data set.

How to cite: Szabo-Roberts, M., Kume, K., Smirnov, A., Zhelavskaya, I., and Shprits, Y.: Re-implementing and Extending the NURD Algorithm to the Full Duration of the Van Allen Probes Mission, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12830, https://doi.org/10.5194/egusphere-egu22-12830, 2022.

EGU22-8 | Presentations | ITS2.5/NH10.8

Nature can be disruptive, so can technology: ITU/WMO/UNEP Focus Group on AI for Natural Disaster Management 

The ITU/WMO/UNEP Focus Group on AI for Natural Disaster Management (FG-AI4NDM) explores the potential of AI to support the monitoring and detection, forecasting, and communication of natural disasters. Building on the presentation at EGU2021, we will show how detailed analysis of real-life use cases by an interdisciplinary, multistakeholder, and international community of experts is leading to the development of three technical reports (dedicated to best practices in data collection and handling, AI-based algorithms, and AI-based communications technologies, respectively), a roadmap of ongoing pre-standardization and standardization activities in this domain, a glossary of relevant terms and definitions, and educational materials to support capacity building. It is hoped that these deliverables will form the foundation of internationally recognized standards.

How to cite: Kuglitsch, M.: Nature can be disruptive, so can technology: ITU/WMO/UNEP Focus Group on AI for Natural Disaster Management, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8, https://doi.org/10.5194/egusphere-egu22-8, 2022.

EGU22-79 | Presentations | ITS2.5/NH10.8

Assessing the impact of sea-level rise on future compound flooding hazards in the Kapuas River delta 

Joko Sampurno, Valentin Vallaeys, Randy Ardianto, and Emmanuel Hanert

Compound flooding hazards in estuarine deltas are increasing due to mean sea-level rise (SLR) driven by climate change. Decision-makers need future hazard analyses to mitigate these events and design adaptation strategies. However, to date, no future hazard analysis has been made for the Kapuas River delta, a low-lying area on the west coast of the island of Borneo, Indonesia. Therefore, this study aims to assess future compound flooding hazards under SLR over the delta, particularly in Pontianak (the densest urban area in the region). Here we consider three SLR scenarios due to climate change, i.e., a low emission scenario (RCP2.6), a medium emission scenario (RCP4.5), and a high emission scenario (RCP8.5). We implement a machine-learning technique, the multiple linear regression (MLR) algorithm, to model the river water level dynamics within the city. We then predict future extreme river water levels due to interactions of river discharges, rainfall, winds, and tides. Furthermore, we create flood maps with the likelihood of areas being flooded for a 100-year return period (1% annual exceedance probability) under the expected sea-level rise. We find that the extreme 1% return water level for the study area in 2100 increases from about 2.80 m (current flood frequency state) to 3.03 m (under RCP2.6), 3.13 m (under RCP4.5), and 3.38 m (under RCP8.5).
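
The MLR step can be sketched as follows (synthetic data and an assumed predictor set; the illustrative SLR offsets below are placeholders, not the scenario values used in the study):

```python
# Hedged sketch: a multiple linear regression relating river water level to
# discharge, rainfall, wind and tide, re-evaluated with an SLR offset added to
# the tidal level for future scenarios.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 3000
discharge = rng.gamma(2.0, 500.0, n)          # m3/s
rainfall = rng.exponential(5.0, n)            # mm/day
wind = rng.normal(3.0, 1.5, n)                # m/s
tide = rng.normal(0.0, 0.8, n)                # m

# Synthetic water level, for demonstration only.
level = (1.2 + 6e-4 * discharge + 0.01 * rainfall + 0.05 * wind + 0.9 * tide
         + rng.normal(0, 0.05, n))

X = np.column_stack([discharge, rainfall, wind, tide])
mlr = LinearRegression().fit(X, level)

# Re-evaluate an extreme event under assumed sea-level rise added to the tide.
extreme = np.array([[4000.0, 80.0, 8.0, 1.2]])
for slr in (0.0, 0.3, 0.5, 0.8):              # illustrative SLR offsets in metres
    shifted = extreme.copy()
    shifted[0, 3] += slr
    print(f"SLR {slr:.2f} m -> water level {mlr.predict(shifted)[0]:.2f} m")
```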

How to cite: Sampurno, J., Vallaeys, V., Ardianto, R., and Hanert, E.: Assessing the impact of sea-level rise on future compound flooding hazards in the Kapuas River delta, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-79, https://doi.org/10.5194/egusphere-egu22-79, 2022.

EGU22-266 | Presentations | ITS2.5/NH10.8

DisDSS 2.0: A Multi-Hazard Web-based Disaster Management System to Identify Disaster-Relevancy of a Social Media Message for Decision-Making Using Deep Learning Techniques 

According to UNDRR (2021), there were 389 reported disasters in 2020. These disasters claimed the lives of 15,080 people, affected 98.4 million people globally, and caused US$171.3 billion in economic damage. International agreements such as the Sendai Framework for Disaster Risk Reduction encourage the use of social media to strengthen disaster risk communication. With the advent of new technologies, social media has emerged as an important source of information in disaster management, and there is an increase in social media activity during disasters. Social media is the fourth most used platform for accessing emergency information. People seek to contact family and friends and search for food, water, transportation, and shelter. During cataclysmic events, the critical information posted on social media is immersed in irrelevant information. To assist and streamline emergency response, robust methodologies are required for extracting relevant information. This research study explores deep learning methods for automatically identifying the relevancy of disaster-related social media messages. The contributions of this study are three-fold. Firstly, we present a hybrid deep learning-based framework to improve the classification of disaster-related social media messages. The data are gathered from the Twitter platform using the Search Application Programming Interface. Messages that contain information regarding the need for or availability of vital resources like food, water, electricity, etc., or that provide situational information, are categorized as relevant messages; the rest are categorized as irrelevant messages. To demonstrate the applicability and effectiveness of the proposed approach, it is applied to the thunderstorm and cyclone Fani datasets. Both disasters happened in India in 2019. Secondly, the performance of the proposed approach is compared with baseline methods, i.e., a convolutional neural network, a long short-term memory network, and a bidirectional long short-term memory network. The proposed approach outperforms the baseline methods. Its performance is evaluated using multiple metrics: accuracy, precision, recall, F-score, area under the receiver operating characteristic curve, and area under the precision-recall curve. The accurate and inaccurate classifications are shown for both datasets. Thirdly, to incorporate our evaluated models into a working application, we extend an existing application, DisDSS, which has been granted a copyright invention award by the Government of India. We call the newly extended system DisDSS 2.0, which integrates our framework to address the disaster relevancy identification issue. The output of this research study helps disaster managers make effective and timely decisions and bridges the gap between decision-makers and citizens during disasters through the lens of deep learning.
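
One possible hybrid architecture for the relevant/irrelevant classification task is sketched below (toy tokenized data; the abstract does not specify the exact layers of the proposed framework, so this CNN + bidirectional LSTM stack is an assumption):

```python
# Hedged sketch: a hybrid CNN + BiLSTM text classifier separating
# disaster-relevant from irrelevant messages.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, MAXLEN = 20000, 50
X = np.random.randint(1, VOCAB, size=(1000, MAXLEN))    # toy tokenized tweets
y = np.random.randint(0, 2, size=(1000,))                # 1 = relevant, 0 = irrelevant

model = models.Sequential([
    layers.Embedding(VOCAB, 64),
    layers.Conv1D(64, 5, activation="relu"),             # local n-gram features
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(32)),               # sequence context
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.fit(X, y, epochs=2, batch_size=64, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```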

How to cite: Singla, A., Agrawal, R., and Garg, A.: DisDSS 2.0: A Multi-Hazard Web-based Disaster Management System to Identify Disaster-Relevancy of a Social Media Message for Decision-Making Using Deep Learning Techniques, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-266, https://doi.org/10.5194/egusphere-egu22-266, 2022.

EGU22-781 | Presentations | ITS2.5/NH10.8

Smart Flood Resilience: Harnessing Community-Scale Big Data for Predictive Flood Risk Monitoring, Rapid Impact Assessment, and Situational Awareness 

Background and objective: The fields of urban resilience to flooding and data science are on a collision course, giving rise to the emerging field of smart resilience. The objective of this study is to propose and demonstrate a smart flood resilience framework that leverages heterogeneous community-scale big data and infrastructure sensor data to enhance predictive risk monitoring and situational awareness.

Smart flood resilience framework: The smart flood resilience framework focuses on four core capabilities that could be augmented through the use of heterogeneous community-scale big data and analytics techniques: (1) predictive flood risk mapping: the capability to predict imminent flood risks (such as overflow of channels) so that communities and emergency management agencies can take preparation and response actions; (2) automated rapid impact assessment: the ability to automatically and quickly evaluate the extent of flood impacts (i.e., physical, social, and economic impacts) to enable crisis responders and public officials to allocate relief and rescue resources on time; (3) infrastructure failure prediction and monitoring: the ability to anticipate imminent failures in infrastructure systems as a flood event unfolds; and (4) smart situational awareness capabilities: the capability to derive proactive insights regarding the evolution of flood impacts on communities (e.g., disrupted access to critical facilities and spatio-temporal patterns of recovery).

Case study: We demonstrate the components of these core capabilities of the smart flood resilience framework in the context of the 2017 Hurricane Harvey in Harris County, Texas. First, with Bayesian network modeling and deep learning methods, we reveal the use of flood sensor data for the prediction of floodwater overflow in channel networks and inundation of co-located road networks. Second, we discuss the use of social media data and machine learning techniques for assessing the impacts of floods on communities and sensing emotion signals to examine societal impacts. Third, we illustrate the use of high-resolution traffic data in network-theoretic models for now-casting of flood propagation on road networks and the disrupted access to critical facilities such as hospitals. Fourth, we leverage location-based and credit card transaction data in advanced spatial data analytics to proactively evaluate the recovery of communities and the impacts of floods on businesses.

Significance: This study shows the significance of the different core capabilities of the smart flood resilience framework in helping emergency managers, city planners, public officials, responders, and volunteers better cope with the impacts of catastrophic flooding events.

How to cite: Mostafavi, A. and Yuan, F.: Smart Flood Resilience: Harnessing Community-Scale Big Data for Predictive Flood Risk Monitoring, Rapid Impact Assessment, and Situational Awareness, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-781, https://doi.org/10.5194/egusphere-egu22-781, 2022.

EGU22-1230 | Presentations | ITS2.5/NH10.8

IBM Operations Risk Insights with Watson: a multi-hazard risk, AI for Natural Disaster Management use case 

Overview:

Operations Risk Insight (ORI) with Watson is an IBM AI application on the cloud.  ORI analyzes thousands of news sources and alert services daily.  There are too many data sources, warnings, watches and advisories for an individual to understand.  For example, during a week in 2021 with record wildfires, hurricanes and COVID hotspots across the US, thousands of impacting risk events hit key points of interest to IBM globally and were analyzed in real time.  

Which events impacted IBM’s business, and which didn’t? ORI has saved IBM millions of dollars annually for the past 5 years.  Our non-profit disaster relief partners have used ORI to respond more effectively to the needs of the vulnerable groups impacted by disasters.  Find out how disaster response leaders identify severe risks using Watson, the Hybrid Cloud, Big Data, Machine Learning and AI.

Presentation Objectives:

The objectives of this session are:

  • Educate the audience on a pragmatic and relevant IBM internal use case for an AI-on-the-Cloud application, using many Watson and The Weather Company APIs, plus machine learning running on IBM's cloud.
  • Obtain feedback and suggestions from the audience on how to expand and improve the machine learning and data analysis for this application to expand its value for natural disaster response leaders.
  • Inspire others to create their own grassroots cognitive project and learn more about AI and cloud technologies.
  • Discuss how this relates to the Call for Code and is used by disaster relief agencies for free to assist the most vulnerable in society.

References Links:  

  • ORI has been featured in two Cloud Pak for Data (CP4D) workbooks: the CP4D Watson Studio Tutorial on Risk Analysis: https://dataplatform.cloud.ibm.com/analytics/notebooks/v2/f2ee8dbf-e6af-4b00-90ca-8f7fee77c377/view and the Flood Risk Project: https://dataplatform.dev.cloud.ibm.com/exchange/public/entry/view/def444923c771f3f20285820dc072eac  Each demonstrates how machine learning can be applied to AI for Natural Disaster Management (NDM). 
  • IBM use case for non-profit partners: https://newsroom.ibm.com/ORI-nonprofits-disaster
  • NC Tech article: https://www.ednc.org/nonprofits-and-artificial-intelligence-join-forces-for-covid-19-relief/
  • Supply Chain Management Review (SCMR) interview: https://www.scmr.com/article/nextgen_supply_chain_interview_tom_ward
  • Supply Chain navigator article: http://scnavigator.avnet.com/article/january-2017/the-missing-link/

How to cite: Ward, T. and Kanwar, R.: IBM Operations Risk Insights with Watson:  a multi-hazard risk, AI for Natural Disaster Management use case, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1230, https://doi.org/10.5194/egusphere-egu22-1230, 2022.

EGU22-1510 | Presentations | ITS2.5/NH10.8

From virtual environment to real observations: short-term hydrological forecasts with an Artificial Neural Network model. 

Renaud Jougla, Manon Ahlouche, Morgan Buire, and Robert Leconte

Machine learning approaches for hydrological forecasting are nowadays common in research. The Artificial Neural Network (ANN) is one of the most popular due to its good performance on watersheds with different hydrologic regimes and over several timescales. A short-term (1- to 7-day ahead) forecast model was explored to predict streamflow. This study focused on the summer season, defined from May to October. Cross-validation was done over a period of 16 years, each time keeping a single year as the validation set.

The ANN model was parameterized with a single hidden layer of 6 neurons. It was developed in a virtual environment based on datasets generated by the physically based distributed hydrological model Hydrotel (Fortin et al., 2012). In a preliminary analysis, several combinations of inputs were assessed, the best combining precipitation and temperature with surface soil moisture and antecedent streamflow. Different spatial discretizations were compared. A semi-distributed discretization was selected to facilitate transferring the ANN model from a virtual environment to real observations such as remote sensing soil moisture products or ground station time series.
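
A minimal sketch of such an ANN configuration (synthetic data; scikit-learn's MLPRegressor is used here purely for illustration):

```python
# Hedged sketch: an ANN with a single hidden layer of 6 neurons mapping
# precipitation, temperature, surface soil moisture and antecedent streamflow
# to the next streamflow value.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 2000
precip = rng.exponential(3.0, n)            # mm/day
temp = rng.normal(15.0, 8.0, n)             # deg C
soil_moisture = rng.uniform(0.1, 0.45, n)   # m3/m3
q_antecedent = rng.gamma(2.0, 20.0, n)      # m3/s

# Synthetic target, for demonstration only.
q_next = (0.8 * q_antecedent + 4.0 * precip * soil_moisture + 0.1 * temp
          + rng.normal(0, 2.0, n))

X = np.column_stack([precip, temp, soil_moisture, q_antecedent])
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0))
ann.fit(X, q_next)
print("Training R^2:", round(ann.score(X, q_next), 3))
```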

Four watersheds were under study: the Au Saumon and Magog watersheds located in south Québec (Canada); the Androscoggin watershed in Maine (USA); and the Susquehanna watershed located in New-York and Pennsylvania (USA). All but the Susquehanna watershed are mainly forested, while the latter has a 57% forest cover. To evaluate whether a model with a data-driven structure can mimic a deterministic model, ANN and Hydrotel simulated flows were compared. Results confirm that the ANN model can reproduce streamflow output from Hydrotel with confidence.

Soil moisture observation stations were deployed in the Au Saumon and Magog watersheds during the summers of 2018 to 2021. Meteorological data were extracted from the ERA5-Land reanalysis dataset. As the period of availability of observed data is short, the ANN model was trained in the virtual environment. Two validations were done: one in the virtual environment and one using real soil moisture observations and flows. The number and locations of the soil moisture probes differed slightly during each of the four summers. Therefore, four models were trained depending on the number of probes and their locations. Results highlight that the location of the soil moisture probes has a large influence on the ANN streamflow outputs and helps identify more representative sub-regions of the watershed.

The use of remote sensing data as inputs to the ANN model is promising. Soil moisture datasets from the SMOS and SMAP missions are available for the four watersheds under study, although downscaling approaches should be applied to bring the spatial resolution of those products to the watershed scale. Another future lead could be the development of a semi-distributed ANN model in a virtual environment based on a restricted selection of hydrological units defined by physiographic characteristics. The future L-band NISAR product could be relevant for this purpose, having a finer spatial resolution than SMAP and SMOS and a better penetration of the signal in forested areas than C-band SAR satellites such as Sentinel-1 and the Radarsat Constellation Mission.

How to cite: Jougla, R., Ahlouche, M., Buire, M., and Leconte, R.: From virtual environment to real observations: short-term hydrological forecasts with an Artificial Neural Network model., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1510, https://doi.org/10.5194/egusphere-egu22-1510, 2022.

EGU22-1650 | Presentations | ITS2.5/NH10.8

Deep Learning for Tropical Cyclone Nowcasting: Experiments with Generative Adversarial and Recurrent Neural Networks 

Tropical Cyclones (TCs) are deadly but rare events that cause considerable loss of life and property damage every year. Traditional TC forecasting and tracking methods focus on numerical forecasting models, synoptic forecasting and statistical methods. However, in recent years there have been several studies investigating applications of Deep Learning (DL) methods for weather forecasting with encouraging results.

We aim to test the efficacy of several DL methods for TC nowcasting, particularly focusing on Generative Adversarial Neural Networks (GANs) and Recurrent Neural Networks (RNNs). The strengths of these network types align well with the given problem: GANs are particularly apt to learn the form of a dataset, such as the typical shape and intensity of a TC, and RNNs are useful for learning timeseries data, enabling a prediction to be made based on the past several timesteps.

The goal is to produce a DL based pipeline to predict the future state of a developing cyclone with accuracy that measures up to current methods.  We demonstrate our approach based on learning from high-resolution numerical simulations of TCs from the Indian and Pacific oceans and discuss the challenges and advantages of applying these DL approaches to large high-resolution numerical weather data.

How to cite: Steptoe, H. and Xirouchaki, T.: Deep Learning for Tropical Cyclone Nowcasting: Experiments with Generative Adversarial and Recurrent Neural Networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1650, https://doi.org/10.5194/egusphere-egu22-1650, 2022.

EGU22-1662 | Presentations | ITS2.5/NH10.8

Exploring the challenges of Digital Twins for weather & climate through an Atmospheric Dispersion modelling prototype 

Stephen Haddad, Peter Killick, Aaron Hopkinson, Tomasz Trzeciak, Mark Burgoyne, and Susan Leadbetter

Digital Twins present a new user-centric paradigm for developing and using weather & climate simulations that is currently being widely embraced, for example through large projects such as Destination Earth led by ECMWF.  In this project we have taken a smaller scale approach in understanding the opportunities and challenges in translating the Digital Twin concept from the original domain of manufacturing and the built environment to modelling of the earth’s atmosphere.

We describe our approach to creating a Digital Twin based on the Met Office’s Atmospheric Dispersion simulation package called NAME. We will discuss the advantages of doing this, such as the ability of nonexpert users to more easily produce scientifically valid simulations of dispersion events, such as industrial fires, and easily obtain results to feed into downstream analysis, for example of health impacts. We will describe the requirements of each of the key components of a digital twin and potential implementation approaches.

We will describe how a Digital Twin framework enables multiple models to be joined together to model complex systems, as required for atmospheric concentrations around chemical spills or fires modelled by NAME. Overall, we outline a potential project blueprint for future work to improve the usability and scientific throughput of existing modelling systems by creating Digital Twins from current core modelling code and data-gathering systems.

How to cite: Haddad, S., Killick, P., Hopkinson, A., Trzeciak, T., Burgoyne, M., and Leadbetter, S.: Exploring the challenges of Digital Twins for weather & climate through an Atmospheric Dispersion modelling prototype, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1662, https://doi.org/10.5194/egusphere-egu22-1662, 2022.

EGU22-2011 | Presentations | ITS2.5/NH10.8

InSAR-Deep learning approach for simulation and prediction of land subsidence in arid regions 

Massive groundwater pumping for agricultural and industrial activities results in significant land subsidence in the arid world. In an acute water crisis, monitoring land subsidence and its key drivers is essential to support groundwater depletion mitigation strategies. Physical models for aquifer simulation related to land deformation are computationally expensive. The interferometric synthetic aperture radar (InSAR) technique provides precise deformation mapping yet is affected by tropospheric and ionospheric errors. This study explores the capabilities of a deep learning approach coupled with satellite-derived variables in modeling subsidence, spatially and temporally, from 2016 to 2020 and predicting subsidence in the near future by using a recurrent neural network (RNN) in the Shabestar basin, Iran. The basin is part of the Urmia Lake River Basin, home to 6.4 million people, yet the lake has largely desiccated due to the over-usage of water resources in the basin. The deep learning model incorporates InSAR-derived land subsidence and its satellite-based key drivers, such as actual evapotranspiration, the Normalized Difference Vegetation Index (NDVI), land surface temperature and precipitation, to yield the importance of critical drivers and inform groundwater governance. The land deformation in the area varied between -93.2 mm/year and 16 mm/year on average in 2016-2020. Our findings reveal that precipitation, evapotranspiration, and vegetation coverage primarily affected land subsidence; furthermore, the subsidence rate is predicted to increase rapidly. The phenomenon follows the same trend as the variation of the Urmia Lake level. This study demonstrates the potential of artificial intelligence incorporating satellite-based ancillary data in land subsidence monitoring and prediction and contributes to future groundwater management.

How to cite: Zhang, Y. and Hashemi, H.: InSAR-Deep learning approach for simulation and prediction of land subsidence in arid regions, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2011, https://doi.org/10.5194/egusphere-egu22-2011, 2022.

EGU22-2879 | Presentations | ITS2.5/NH10.8

Automatically detecting avalanches with machine learning in optical SPOT6/7 satellite imagery 

Elisabeth D. Hafner, Patrick Barton, Rodrigo Caye Daudt, Jan Dirk Wegner, Konrad Schindler, and Yves Bühler

Safety-related applications like avalanche warning or risk management depend on timely information about avalanche occurrence. Knowledge of the locations and sizes of released avalanches is crucial for the responsible decision-makers. Such information is still collected today in a non-systematic way by observers in the field, for example from ski resort patrols or community avalanche services. Consequently, the existing avalanche mapping is, in particular in situations with high avalanche danger, strongly biased towards accessible terrain in proximity to (winter sport) infrastructure.

Recently, remote sensing has been shown to be capable of partly filling this gap, providing spatially continuous information on avalanche occurrences over large regions. In previous work we applied optical SPOT 6/7 satellite imagery to manually map two avalanche periods over a large part of the Swiss Alps (2018: 12’500 and 2019: 9’500 km2). Subsequently, we investigated the reliability of this mapping and proved its suitability by identifying almost three quarters of all avalanches that occurred (larger than size 1) from SPOT 6/7 imagery. Optical SPOT data are therefore an excellent source for continuous avalanche mapping, currently restricted by the time-intensive manual mapping. To speed up this process we now propose a fully convolutional neural network (CNN) called AvaNet. AvaNet is based on a Deeplabv3+ architecture adapted to learn what avalanches look like, for example by explicitly including height information from a digital terrain model (DTM). Relying on the manually mapped 24’737 avalanches for training, validation and testing, AvaNet achieves an F1 score of 62.5% when thresholding the probabilities from the network predictions at 0.5. In this study we present the results of our network in more detail, including different model variations and results of predictions on data from a third avalanche period that we did not train on.
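
The key architectural adaptation, feeding imagery plus a height channel into an off-the-shelf segmentation backbone, can be sketched as follows (this uses torchvision's DeepLabv3 purely as an illustration and is not the AvaNet implementation):

```python
# Hedged sketch: adapt a DeepLabv3 model to 4 input channels (optical bands +
# DTM-derived height) and one avalanche class, thresholding probabilities at 0.5.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(num_classes=1)
# Replace the first backbone convolution so it accepts 4 channels instead of 3.
model.backbone.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.eval()

x = torch.rand(1, 4, 256, 256)               # toy image tile + height channel
with torch.no_grad():
    logits = model(x)["out"]                 # (1, 1, 256, 256)
avalanche_mask = torch.sigmoid(logits) > 0.5
print(avalanche_mask.shape, avalanche_mask.float().mean().item())
```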

The ability to automate the mapping and therefore quickly identify avalanches from satellite imagery is an important step forward in regularly acquiring spatially continuous avalanche occurrence data. This enables the provision of essential information for complementing avalanche databases, making Alpine regions safer.

How to cite: Hafner, E. D., Barton, P., Caye Daudt, R., Wegner, J. D., Schindler, K., and Bühler, Y.: Automatically detecting avalanches with machine learning in optical SPOT6/7 satellite imagery, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2879, https://doi.org/10.5194/egusphere-egu22-2879, 2022.

EGU22-3212 | Presentations | ITS2.5/NH10.8

Predicting Landslide Susceptibility in Cross River State of Nigeria using Machine Learning 

Joel Efiong, Devalsam Eni, Josiah Obiefuna, and Sylvia Etu

Landslides continue to wreak havoc in many parts of the globe, yet comprehensive studies of landslide susceptibility for many of these areas are either lacking or inadequate. Hence, this study was aimed at predicting landslide susceptibility in Cross River State of Nigeria using machine learning. Specifically, the frequency ratio (FR) model was adopted. In this approach, a landslide inventory map was developed using 72 landslide locations identified during fieldwork combined with other relevant data sources. Using appropriate geostatistical analyst tools within a geographic information system environment, the landslide locations were randomly divided into two parts in the ratio of 7:3 for the training and validation processes, respectively. A total of 12 landslide-causing factors, namely elevation, slope, aspect, profile curvature, plan curvature, topographic position index, topographic wetness index, stream power index, land use/land cover, geology, distance to waterbody and distance to major roads, were selected and used in the spatial relationship analysis of the factors influencing landslide occurrences in the study area. The FR model was then developed using the landslide training sample to investigate landslide susceptibility in Cross River State and was subsequently validated. It was found that the distribution of landslides in Cross River State was largely controlled by a combined effect of geo-environmental factors such as elevation of 250–500 m, slope gradient of >35°, slopes facing the southwest direction, decreasing degree of both positive and negative curvatures, increasing values of topographic position index, fragile sands, sparse vegetation (especially in settlement and bare-surface areas), and distances to waterbodies and major roads of <500 m. About 46% of the mapped area was found to lie in landslide susceptibility zones ranging from moderate to very high. The susceptibility model was validated with 90.90% accuracy. This study presents a comprehensive investigation of landslide susceptibility in Cross River State, which will be useful for land use planning and for mitigation measures against landslide-induced vulnerability in the study area; the findings can also be extrapolated to other areas with similar environmental conditions. This is a novel use of a machine learning technique in hazard susceptibility mapping.
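
For clarity, the frequency ratio of a factor class is the proportion of landslide cells falling in that class divided by the proportion of all cells in that class; a value above 1 marks a class that favours landsliding. Below is a minimal Python sketch with synthetic data (column names and class breaks are illustrative, not those of the study):

# Hypothetical sketch of the FR computation for one conditioning factor.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cells = pd.DataFrame({
    "slope_class": rng.choice(["0-15", "15-35", ">35"], size=10_000, p=[0.5, 0.35, 0.15]),
    "landslide": rng.random(10_000) < 0.01,        # presence/absence of training scars
})

class_total = cells.groupby("slope_class").size()
class_slides = cells.groupby("slope_class")["landslide"].sum()
fr = (class_slides / class_slides.sum()) / (class_total / class_total.sum())
print(fr)   # FR > 1 indicates a class more prone to landsliding

# The susceptibility index of a cell is then the sum of the FR values of the
# classes it falls into, over all conditioning factors.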

 

Keywords: Landslide; Landslide Susceptibility mapping; Cross River State, Nigeria; Frequency ratio, Machine learning

How to cite: Efiong, J., Eni, D., Obiefuna, J., and Etu, S.: Predicting Landslide Susceptibility in Cross River State of Nigeria using Machine Learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3212, https://doi.org/10.5194/egusphere-egu22-3212, 2022.

EGU22-3283 | Presentations | ITS2.5/NH10.8

Assessment of Flood-Damaged Cropland Trends Under Future Climate Scenarios Using Convolutional Neural Network 

Rehenuma Lazin, Xinyi Shen, and Emmanouil Anagnostou

Every year, floods cause severe damage to cropland, contributing to global food insecurity. As climate change continues, floods are projected to become more frequent. To cope with future climate impacts, mitigate damages, and ensure food security, it is imperative to study future trends of flood damage to cropland. In this study, we use a convolutional neural network (CNN) to estimate the damages (in acres) to corn and soybean lands across the mid-western USA with projections from climate models. Here, we extend the application of the CNN model developed by Lazin et al. (2021), which shows ~25% mean relative error for county-level flood-damaged crop loss estimation. Meteorological variables derived from the reference gridMET dataset are used as predictors to train the model over 2008-2020. We then use downscaled climate projections from the Multivariate Adaptive Constructed Analogs (MACA) dataset in the trained CNN model to assess future flood damage patterns in the cropland in the early (2011-2040), mid (2041-2070), and late (2071-2100) century, relative to the baseline historical period (1981-2010). Results derived from this study will help understand crop loss trends due to floods under climate change scenarios and plan necessary arrangements to mitigate damages in the future.
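
As a hedged illustration of the model family (not the architecture of Lazin et al., 2021), the following Python/PyTorch sketch shows a small CNN that regresses damaged crop area from a stack of gridded meteorological predictors; the variable count, grid size and layer sizes are assumptions:

# Hypothetical sketch: CNN regression from gridded predictors to damaged acreage.
import torch
import torch.nn as nn

class CropDamageCNN(nn.Module):
    def __init__(self, n_vars=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_vars, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(32, 1)           # damaged area (acres)

    def forward(self, x):                            # x: (batch, variables, H, W)
        return self.regressor(self.features(x).flatten(1)).squeeze(-1)

model = CropDamageCNN()
x = torch.randn(8, 6, 64, 64)                        # 8 counties, 6 gridded predictors
print(model(x).shape)                                # -> torch.Size([8])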

 

Reference:

[1] Lazin, R., Shen, X., & Anagnostou, E. (2021). Estimation of flood-damaged cropland area using a convolutional neural network. Environmental Research Letters, 16(5), 054011.

How to cite: Lazin, R., Shen, X., and Anagnostou, E.: Assessment of Flood-Damaged Cropland Trends Under Future Climate Scenarios Using Convolutional Neural Network, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3283, https://doi.org/10.5194/egusphere-egu22-3283, 2022.

EGU22-3422 | Presentations | ITS2.5/NH10.8

Weather history encoding for machine learning-based snow avalanche detection 

Thomas Gölles, Kathrin Lisa Kapper, Stefan Muckenhuber, and Andreas Trügler

Since its start in 2014, the Copernicus Sentinel-1 programme has provided free-of-charge, weather-independent, high-resolution satellite Earth observations and has set in motion major scientific advances in the detection of snow avalanches from satellite imagery. Recently, operational avalanche detection from Sentinel-1 Synthetic Aperture Radar (SAR) images was successfully introduced for some test regions in Norway. However, current state-of-the-art avalanche detection algorithms based on machine learning do not include weather history. We propose a novel way to encode weather data and include it in an automatic avalanche detection pipeline for the Austrian Alps. The approach consists of four steps. First, the raw data in netCDF format, consisting of several meteorological parameters over several time steps, are downloaded. Second, the weather data are downscaled onto the pixel locations of the SAR image. Third, the data are aggregated over time, producing a two-dimensional grid with one value per SAR pixel at the time when the SAR data were recorded. This aggregation function can range from simple averages to full snowpack models. In the final step, the grid is converted to an image with greyscale values corresponding to the aggregated values. The resulting image is then ready to be fed into the machine learning pipeline. We will include this encoded weather history data to increase the avalanche detection performance, and we will investigate contributing factors with model interpretability tools and explainable artificial intelligence.
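
A hedged Python sketch of these four steps using xarray is given below; the file name, variable name, coordinate names and SAR grid are placeholders rather than the project's actual data:

# Hypothetical sketch of the weather-history encoding pipeline.
import numpy as np
import xarray as xr

ds = xr.open_dataset("weather_history.nc")                  # step 1: raw netCDF fields
sar_x = np.linspace(10.0, 12.0, 2000)                       # SAR pixel coordinates (lon)
sar_y = np.linspace(46.5, 47.5, 1500)                       # SAR pixel coordinates (lat)

snow = ds["snowfall"].interp(lon=sar_x, lat=sar_y)          # step 2: downscale to SAR grid
agg = snow.sel(time=slice("2022-01-01", "2022-01-07")).sum("time")   # step 3: aggregate over time

vals = agg.values                                            # step 4: map to greyscale
grey = (255 * (vals - np.nanmin(vals)) /
        (np.nanmax(vals) - np.nanmin(vals) + 1e-9)).astype(np.uint8)
# 'grey' can now be stacked with the SAR channels as input to the detector.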

How to cite: Gölles, T., Kapper, K. L., Muckenhuber, S., and Trügler, A.: Weather history encoding for machine learning-based snow avalanche detection, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3422, https://doi.org/10.5194/egusphere-egu22-3422, 2022.

EGU22-4250 | Presentations | ITS2.5/NH10.8

Landslide Susceptibility Modeling of an Escarpment in Southern Brazil using Artificial Neural Networks as a Baseline for Modeling Triggering Rainfall 

Luísa Vieira Lucchese, Guilherme Garcia de Oliveira, Alexander Brenning, and Olavo Correa Pedrollo

Landslide Susceptibility Mapping (LSM) and rainfall thresholds are well-documented tools used to model the occurrence of rainfall-induced landslides. In locations where rainfall can be considered the main landslide trigger, both methodologies apply essentially to the same areas, and a model that encompasses both would be an important step towards a better understanding and prediction of landslide-triggering rainfall events. In this research, we employ spatially cross-validated, hyperparameter-tuned Artificial Neural Networks (ANNs) to predict the susceptibility to landslides of an area in southern Brazil. As a next step, we plan to add the triggering rainfall to this Artificial Intelligence model, which will concurrently model the susceptibility and the triggering rainfall event for a given area. The ANN is a Multi-Layer Perceptron with three layers. The number of neurons in the hidden layer was tuned separately for each cross-validation fold, using a method described in previous work. The study area is the escarpment at the limits of the municipalities of Presidente Getúlio, Rio do Sul, and Ibirama, in southern Brazil. For this area, 82 landslide scars related to the event of December 17th, 2020, were mapped. The metrics for each fold are presented and the final susceptibility map for the area is shown and analyzed. The evaluation metrics attained are satisfactory and the resulting susceptibility map highlights the escarpment areas as most susceptible to landslides. The ANN-based susceptibility mapping in the area is considered successful and is seen as a baseline for identifying rainfall thresholds in susceptible areas, which will be accomplished with a combined susceptibility and rainfall model in our future work.
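
A minimal Python/scikit-learn sketch of the general setup (not the authors' code) is shown below: a one-hidden-layer MLP whose hidden-neuron count is tuned separately inside each spatial cross-validation fold, with grouped folds standing in for the spatial cross-validation and synthetic data throughout:

# Hypothetical sketch: per-fold tuning of the hidden layer size.
import numpy as np
from sklearn.model_selection import GroupKFold, GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))                    # terrain-derived predictors
y = rng.random(1000) < 0.2                        # landslide presence/absence
groups = rng.integers(0, 5, size=1000)            # spatial blocks for cross-validation

outer = GroupKFold(n_splits=5)
for train, test in outer.split(X, y, groups):
    search = GridSearchCV(MLPClassifier(max_iter=2000),
                          {"hidden_layer_sizes": [(4,), (8,), (16,), (32,)]},
                          cv=3, scoring="roc_auc")
    search.fit(X[train], y[train])
    auc = roc_auc_score(y[test], search.predict_proba(X[test])[:, 1])
    print(search.best_params_, f"fold AUC = {auc:.2f}")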

How to cite: Vieira Lucchese, L., Garcia de Oliveira, G., Brenning, A., and Correa Pedrollo, O.: Landslide Susceptibility Modeling of an Escarpment in Southern Brazil using Artificial Neural Networks as a Baseline for Modeling Triggering Rainfall, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4250, https://doi.org/10.5194/egusphere-egu22-4250, 2022.

EGU22-4266 | Presentations | ITS2.5/NH10.8

Camera Rain Gauge Based on Artificial Intelligence 

Raffaele Albano, Nicla Notarangelo, Kohin Hirano, and Aurelia Sole

Flood risk monitoring, alerting and adaptation in urban areas require near-real-time fine-scale precipitation observations that are challenging to obtain from currently available measurement networks due to their costs and installation difficulties. In this sense, newly available data sources and computational techniques offer enormous potential, in particular by exploiting non-dedicated, widespread, and accessible devices.

This study proposes an unprecedented system for rainfall monitoring based on artificial intelligence, using deep learning for computer vision applied to camera images. In contrast to the existing literature, the method is not device-specific and exploits general-purpose, low-cost cameras (e.g., smartphones, surveillance cameras, dashboard cameras), without requiring parameter setting, timeline shots, or videos. Rainfall is measured directly from single photographs through deep learning models based on transfer learning with Convolutional Neural Networks. A binary classification algorithm is developed to detect the presence of rain. Moreover, a multi-class classification algorithm is used to estimate a quasi-instantaneous rainfall intensity range. Open data, dash-cam footage in Japan coupled with the high-precision multi-parameter radar XRAIN, and experiments in the NIED Large-Scale Rainfall Simulator were combined to form heterogeneous and realistic datasets for training, validation, and testing. Finally, a case study over the Matera urban area (Italy) was used to illustrate the potential and limitations of rainfall monitoring using camera-based detectors.
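
The transfer-learning idea can be sketched as follows in Python/PyTorch (a hedged illustration, not the study's exact backbone or training setup): an ImageNet-pretrained CNN is frozen and only a new classification head is trained for the rain/no-rain decision, with the multi-class intensity variant obtained by changing the number of output classes:

# Hypothetical sketch of transfer learning for single-image rain classification.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():                    # freeze the pretrained features
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # rain / no-rain head
# For the multi-class variant, replace 2 with the number of intensity ranges (e.g. 6).

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)               # a batch of single photographs
labels = torch.tensor([0, 1, 1, 0])                # 1 = rain present
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()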

The prototype was deployed in a real-world operational environment using a pre-existing 5G surveillance camera. The results of the binary classifier showed great robustness and portability: the accuracy was 85.28% in testing and 85.13% in deployment, with F1-scores of 0.86 and 0.85, respectively, whereas literature algorithms suffer drastic accuracy drops when the image source changes (e.g., from 91.92% to 18.82%). The 6-way classifier reached a test average accuracy of 77.71% and a macro-averaged F1 of 0.73, presenting the best performances for no-rain and heavy rainfall, which represent critical conditions for flood risk. Thus, the results of the tests and the use case demonstrate the model’s ability to detect a meteorological state that is significant for early warning systems. The classification can be performed on single pictures taken in disparate lighting conditions by common acquisition devices, i.e. by static or moving cameras without adjusted parameters. The system does not suit scenes that are also misleading for human visual perception. The proposed method features a high readiness level, cost-effectiveness, and limited operational requirements that allow an easy and quick implementation by exploiting pre-existing devices with a parsimonious use of economic and computational resources.

Altogether, this study corroborates the potential of non-traditional and opportunistic sensing networks for the development of hydrometeorological monitoring systems in urban areas, where traditional measurement methods encounter limitations, and in data-scarce contexts, e.g. where remotely sensed rainfall information is unavailable or too coarse with respect to the scale of the study. Future research will involve incremental learning algorithms and further data collection via experiments and crowdsourcing, to improve accuracy and at the same time promote public resilience from a smart city perspective.

How to cite: Albano, R., Notarangelo, N., Hirano, K., and Sole, A.: Camera Rain Gauge Based on Artificial Intelligence, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4266, https://doi.org/10.5194/egusphere-egu22-4266, 2022.

EGU22-4730 | Presentations | ITS2.5/NH10.8

floodGAN – A deep learning-based model for rapid urban flood forecasting 

Julian Hofmann and Holger Schüttrumpf

Recent urban flood events revealed how severe and fast the impacts of heavy rainfall can be. Pluvial floods pose an increasing risk to communities worldwide due to ongoing urbanization and changes in climate patterns. Still, pluvial flood warnings are limited to meteorological forecasts or water level monitoring, which are insufficient to warn people about local, terrain-specific flood risks. Rapid flood models are therefore essential to implement effective and robust early warning systems to mitigate the risk of pluvial flooding. Although hydrodynamic (HD) models are state-of-the-art for simulating pluvial flood hazards, the required computation times are too long for real-time applications.

In order to overcome the computation time bottleneck of HD models, the deep learning model floodGAN has been developed. FloodGAN combines two adversarial Convolutional Neural Networks (CNNs) that are trained on high-resolution rainfall-flood data generated from rainfall generators and HD models. FloodGAN translates the flood forecasting problem into an image-to-image translation task whereby the model learns the non-linear spatial relationships between rainfall and hydraulic data. Thus, it directly translates spatially distributed rainfall forecasts into detailed hazard maps within seconds. In addition to the inundation depth, the model can predict the velocities and the timing of hydraulic peaks of an upcoming rainfall event. Due to its image-translation approach, the floodGAN model can be applied to large areas and can be run on standard computer systems, fulfilling the requirements of fast and practical flood warning systems.
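
To make the image-to-image idea concrete, the following hedged Python/PyTorch sketch pairs a small encoder-decoder generator, mapping a rainfall field to an inundation-depth map, with a conditional patch discriminator; this is only an illustration of the adversarial setup, not the floodGAN architecture:

# Hypothetical sketch of an image-to-image GAN for rainfall-to-depth translation.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),      # encode rainfall
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),         # decode depth map
        )
    def forward(self, rain):
        return self.net(rain)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),                  # patch scores
        )
    def forward(self, rain, depth):
        return self.net(torch.cat([rain, depth], dim=1))               # conditioned on rainfall

G, D = Generator(), Discriminator()
rain = torch.randn(2, 1, 128, 128)                                     # rainfall forecast
fake_depth = G(rain)
print(fake_depth.shape, D(rain, fake_depth).shape)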

To evaluate the accuracy and generalization capabilities of the floodGAN model, numerous performance tests were carried out using synthetic rainfall events as well as a past heavy rainfall event from 2018, with the city of Aachen as the case study. The performance tests demonstrated a speedup factor of about 10⁶ compared to HD models while maintaining high model quality and accuracy and good generalization capabilities for highly variable rainfall events. Further improvements can be obtained by integrating recurrent neural network architectures and training with temporal rainfall series to forecast the dynamics of the flooding processes.

How to cite: Hofmann, J. and Schüttrumpf, H.: floodGAN – A deep learning-based model for rapid urban flood forecasting, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4730, https://doi.org/10.5194/egusphere-egu22-4730, 2022.

EGU22-4900 | Presentations | ITS2.5/NH10.8

A modular and scalable workflow for data-driven modelling of shallow landslide susceptibility 

Ann-Kathrin Edrich, Anil Yildiz, Ribana Roscher, and Julia Kowalski

The spatial impact of a single shallow landslide is small compared to a deep-seated, impactful failure, and hence its damage potential is localized and limited. Yet their higher frequency of occurrence and spatio-temporal correlation in response to external triggering events, such as strong precipitation, result in dramatic risks for population, infrastructure and the environment. It is therefore essential to continuously investigate and analyze the spatial hazard that shallow landslides pose. Its visualisation through regularly updated, dynamic hazard maps can be used by decision and policy makers. Even though a number of data-driven approaches for shallow landslide hazard mapping exist, a generic workflow has not yet been described. Therefore, we introduce a scalable and modular machine learning-based workflow for shallow landslide hazard prediction in this study. The scientific test case for the development of the workflow investigates rainfall-triggered shallow landslide hazard in Switzerland. A benchmark dataset was compiled to train the data-driven model, based on a historic landslide database as presence data and a pseudo-random choice of absence locations. At the current stage, the dataset comprises 14 features describing topography, soil type, land cover and hydrology. This work also investigates suitable approaches for choosing absence locations and the influence of this choice on the predicted hazard, as this influence has not been comprehensively studied. We aim at enabling time-dependent and dynamic hazard mapping by incorporating time-dependent precipitation data into the training dataset alongside the static features. Inclusion of temporal trigger factors, i.e. rainfall, enables a regularly updated landslide hazard map based on the precipitation forecast. Our approach includes the investigation of a suitable precipitation metric for the occurrence of shallow landslides at the absence locations, based on the statistical evaluation of the precipitation behavior at the presence locations. In this presentation, we will describe the modular workflow as well as the benchmark dataset and show preliminary results, including the above-mentioned approaches to handling absence locations and time-dependent data.
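
One simple pseudo-absence strategy of the kind investigated here can be sketched as follows in Python (a hypothetical example with synthetic coordinates and an assumed buffer distance, not the workflow's actual rule):

# Hypothetical sketch: pseudo-absence points are kept only if they lie farther
# than a buffer distance from any recorded landslide location.
import numpy as np

rng = np.random.default_rng(42)
presence = rng.uniform(0, 100, size=(200, 2))           # historic landslide locations (km)
buffer_km = 1.0

def sample_absences(n, presence, buffer):
    absences = []
    while len(absences) < n:
        pt = rng.uniform(0, 100, size=2)
        if np.min(np.linalg.norm(presence - pt, axis=1)) > buffer:
            absences.append(pt)
    return np.array(absences)

absence = sample_absences(200, presence, buffer_km)      # balanced presence/absence set
print(presence.shape, absence.shape)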

How to cite: Edrich, A.-K., Yildiz, A., Roscher, R., and Kowalski, J.: A modular and scalable workflow for data-driven modelling of shallow landslide susceptibility, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4900, https://doi.org/10.5194/egusphere-egu22-4900, 2022.

EGU22-6568 | Presentations | ITS2.5/NH10.8

Harnessing Machine Learning and Deep Learning applications for climate change risk assessment: a survey 

Davide Mauro Ferrario, Elisa Furlan, Silvia Torresan, Margherita Maraschini, and Andrea Critto

In recent years there has been growing interest in Machine Learning (ML) for climate risk/multi-risk assessment, driven mainly by the growing amount of available data and the reduction of associated computational costs. Extracting information from spatio-temporal data is critically important for problems such as extreme event forecasting and assessing risks and impacts from multiple hazards. Typical challenges to which AI and ML are now being applied require understanding the dynamics of complex systems, which involve many features with non-linear relations and feedback loops, analysing the effects of phenomena happening at different time scales, such as slow-onset events (sea level rise) and short-term episodic events (storm surges, floods), and estimating the uncertainties of long-term predictions and scenarios.
While there have been many successful applications of AI/ML in recent years, such as Random Forests or Long Short-Term Memory (LSTM) networks in flood and storm surge risk assessment, there are still open questions and challenges that need to be addressed. In fact, there is a lack of data for extreme events, and Deep Learning (DL) algorithms often need huge amounts of information to disentangle the relationships among the hazard, exposure and vulnerability factors contributing to the occurrence of risks. Moreover, the spatio-temporal resolution can be highly irregular and needs to be reconstructed to produce accurate and efficient models. For example, data from meteorological ground stations offer accurate datasets with fine temporal resolution but an irregular distribution in the spatial dimension; on the other hand, leveraging satellite images can give access to more spatially refined data that often lack the temporal dimension (fewer events available due to atmospheric disturbances).
Several techniques have been applied, ranging from classical multi-step forecasting, state-space and Hidden Markov models to DL techniques such as Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). ANNs and Deep Generative Models (DGM) have been used to reconstruct spatio-temporal grids and to model continuous time series, CNNs to exploit spatial relations, Graph Neural Networks (GNN) to extract multi-scale localized spatial features, and RNNs and LSTMs for multi-scale time series prediction.
To bridge these gaps, an in-depth state-of-the-art review of the mathematical and computer science innovations in ML/DL techniques that could be applied to climate/multi-risk assessment was undertaken. The review focuses on three possible ML/DL applications: analysis of the spatio-temporal dynamics of risk factors, with particular attention to applications for irregular spatio-temporal grids; multivariate analysis for multi-hazard interactions and multiple risk assessment endpoints; and analysis of future scenarios under climate change. We will present the main outcomes of the scientometric and systematic review of publications across the 2000–2021 timeframe, which allowed us to: i) summarize keywords and word co-occurrence networks, ii) highlight linkages, working relations and co-citation clusters, iii) compare ML and DL approaches with classical statistical techniques and iv) explore applications at the forefront of the risk assessment community.

How to cite: Ferrario, D. M., Furlan, E., Torresan, S., Maraschini, M., and Critto, A.: Harnessing Machine Learning and Deep Learning applications for climate change risk assessment: a survey, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6568, https://doi.org/10.5194/egusphere-egu22-6568, 2022.

EGU22-6576 | Presentations | ITS2.5/NH10.8

Swept Away: Flooding and landslides in Mexican poverty nodes 

Silvia García, Raul Aquino, and Walter Mata

Natural disasters should be examined within a risk-perspective framework where both the natural threat and vulnerability are considered as intricate components of an extremely complex equation. The trend toward more frequent floods and landslides in Mexico in recent decades is not only the result of more intense rainfall, but also a consequence of increased vulnerability. As a multifactorial element, vulnerability is a low-frequency modulating factor of the risk dynamics to intense rainfall. It can be described in terms of physical, social, and economic factors. For instance, deforested or urbanized areas are the physical and social factors that lead to the deterioration of watersheds and an increased vulnerability to intense rains. Increased watershed vulnerability due to land-cover changes is the primary factor leading to more floods, particularly over Pacific Mexico. In some parts of the country, such as Colima, the increased frequency of intense rainfall (i.e., the natural hazard) associated with high-intensity tropical cyclones and hurricanes is the leading cause of more frequent floods.

 

In this research an intelligent rain-management system is presented. The system is built to forecast and simulate the components of risk, to establish communication between rescue/aid teams and to support preparedness activities (training). Its main task is the detection, monitoring, analysis and forecasting of the hazards and scenarios that promote floods and landslides. The developed methodology is based on a database that permits relating heavy rainfall measurements to changes in land cover and use, terrain slope, basin compactness and community resilience as key vulnerability factors. A neural procedure is used for the spatial definition of exposure and susceptibility (intrinsic and extrinsic parameters) and Machine Learning techniques are applied to find the If-Then relationships. The capability of the intelligent model for Colima, Mexico was tested by comparing the observed and modeled frequency of landslides and floods over a ten-year period. It was found that over most of the Mexican territory, more frequent floods are the result of a rapid deforestation process, and that landslides and their impact on communities are directly related to the unauthorized growth of settlements in high geo-risk areas (due to forced migration caused by violence or extreme poverty) and to the development of civil infrastructure (mainly roads) with a high impact on the natural environment. Consequently, the intelligent rain-management system offers the possibility to redesign and plan land use and the spatial distribution of the poorest communities.

How to cite: García, S., Aquino, R., and Mata, W.: Swept Away: Flooding and landslides in Mexican poverty nodes, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6576, https://doi.org/10.5194/egusphere-egu22-6576, 2022.

EGU22-6690 | Presentations | ITS2.5/NH10.8

A machine learning-based ensemble model for estimation of seawater quality parameters in coastal area 

Xiaotong Zhu, Jinhui Jeanne Huang, Hongwei Guo, Shang Tian, and Zijie Zhang

The precise estimation of seawater quality parameters is crucial for decision-makers managing coastal water resources. Although various machine learning (ML)-based algorithms have been developed for seawater quality retrieval using remote sensing, the performance of these models when applied to specific regions remains significantly uncertain due to the differing properties of coastal waters. Moreover, the predictions of these ML models are not explainable. To address these problems, an ML-based ensemble model was developed in this study. The model was applied to estimate chlorophyll-a (Chla), turbidity, and dissolved oxygen (DO) based on Sentinel-2 satellite imagery in Shenzhen Bay, China. The optimal input features for each seawater quality parameter were selected from nine simulation scenarios generated from eight spectral bands and six spectral indices. A local explanation method called SHapley Additive exPlanations (SHAP) was introduced to quantify the contributions of the various features to the predictions of the seawater quality parameters. The results suggest that the ensemble model with feature selection improved the estimation performance for all three seawater quality parameters (errors of 1.7%, 1.5%, and 0.02% for Chla, turbidity, and DO, respectively). Furthermore, the reliability of the model was verified by mapping the spatial distributions of water quality parameters during the model validation period. The spatio-temporal patterns revealed that the distributions of seawater quality were mainly influenced by estuary input. Correlation analysis demonstrated that air temperature (Temp) and average air pressure (AAP) exhibited the closest relationship with Chla. DO was most closely related to Temp, while turbidity was not sensitive to Temp, average wind speed (AWS), or AAP. This study enhances the prediction capability for seawater quality parameters and provides a scientific coastal water management approach for decision-makers.
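
The SHAP step can be sketched as follows in Python (a hedged example: the regressor, feature names and data are placeholders, not the ensemble model or Sentinel-2 features of the study):

# Hypothetical sketch: SHAP attribution of a water quality estimate to spectral features.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 5)),
                 columns=["B03", "B04", "B05", "NDCI", "NDWI"])        # illustrative features
y = 2.0 * X["NDCI"] + 0.5 * X["B05"] + rng.normal(scale=0.1, size=300)  # synthetic Chla

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature = its contribution to the Chla estimate.
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns).sort_values())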

How to cite: Zhu, X., Huang, J. J., Guo, H., Tian, S., and Zhang, Z.: A machine learning-based ensemble model for estimation of seawater quality parameters in coastal area, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6690, https://doi.org/10.5194/egusphere-egu22-6690, 2022.

EGU22-6758 | Presentations | ITS2.5/NH10.8

AI-enhanced Integrated Alert System for effective Disaster Management 

Pankaj Kumar Dalela, Saurabh Basu, Sandeep Sharma, Anugandula Naveen Kumar, Suvam Suvabrata Behera, and Rajkumar Upadhyay

Effective communication systems supported by Information and Communication Technologies (ICTs) are integral and important components of comprehensive disaster management. Continuous warning monitoring, prediction, dissemination, and response coordination, along with public engagement utilizing the capabilities of emerging technologies including Artificial Intelligence (AI), can help build resilience and ensure Disaster Risk Reduction. Thus, for effective disaster management, an Integrated Alert System is proposed which brings all concerned disaster management authorities and alert forecasting and disseminating agencies under a single umbrella for alerting the targeted public through various communication channels. An integral part of the system is a data-driven, citizen-centric Decision Support System, enhanced through AI, which can help disaster managers perform a complete impact assessment of disaster events through the configuration of decision models developed by learning the inter-relationships of different parameters. The system needs to be capable of identifying suitable communication means for community outreach, predicting the scope of an alert, assessing the influence of alert messages on the targeted vulnerable population, performing crowdsourced data analysis, and evaluating disaster impact through threat maps and dashboards, thereby providing a complete analysis of the disaster event in all phases of disaster management. The system aims to address the challenges posed by current systems, including limited utilization of communication channels and audience reach, language differences, and lack of ground information in decision making, by utilizing state-of-the-art technologies.

How to cite: Dalela, P. K., Basu, S., Sharma, S., Kumar, A. N., Behera, S. S., and Upadhyay, R.: AI-enhanced Integrated Alert System for effective Disaster Management, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6758, https://doi.org/10.5194/egusphere-egu22-6758, 2022.

The main purpose of this contribution is to present the latest findings on automatic methods of processing social network data for developing seismic intensity maps. As a case study, the author selected the 2020 Samos earthquake event (Mw = 7, 30 October 2020, Greece). That earthquake had significant consequences for the urban environment and caused 2 deaths and 19 injuries. Initially, an automatic approach recently presented in the international literature was applied, producing seismic intensity maps from tweets. Furthermore, some initial findings regarding the use of machine learning in various parts of the automatic methodology are presented, along with the potential of using photos posted on social networks. The data used comprise several thousand tweets and Instagram posts. The results provide vital findings on enriching data sources and data types and on effective rapid processing.

How to cite: Arapostathis, S. G.: The Samos earthquake event (Mw = 7, 30 October 2020, Greece) as case study for applying machine learning on texts and photos scraped from social networks for developing seismic intensity maps., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7129, https://doi.org/10.5194/egusphere-egu22-7129, 2022.

EGU22-7308 | Presentations | ITS2.5/NH10.8

Building an InSAR-based database to support geohazard risk management by exploiting large ground deformation datasets 

Marta Béjar-Pizarro, Pablo Ezquerro, Carolina Guardiola-Albert, Héctor Aguilera Alonso, Margarita Patricia Sanabria Pabón, Oriol Monserrat, Anna Barra, Cristina Reyes-Carmona, Rosa Maria Mateos, Juan Carlos García López Davalillo, Juan López Vinielles, Guadalupe Bru, Roberto Sarro, Jorge Pedro Galve, Roberto Tomás, Virginia Rodríguez Gómez, Joaquín Mulas de la Peña, and Gerardo Herrera

The detection of areas of the Earth’s surface experiencing active deformation processes and the identification of the responsible phenomena (e.g. landslides activated after rainy events, subsidence due to groundwater extraction in agricultural areas, consolidation settlements, instabilities in active or abandoned mines) are critical for geohazard risk management and, ultimately, for mitigating the unwanted effects on the affected populations and the environment.

This will now be possible at the European level thanks to the Copernicus European Ground Motion Service (EGMS), which will provide ground displacement measurements derived from time series analyses of Sentinel-1 data using Interferometric Synthetic Aperture Radar (InSAR). The EGMS, which will be available to users in the first quarter of 2022 and will be updated annually, will be especially useful to identify displacements associated with landslides, subsidence and deformation of infrastructure. To fully exploit the capabilities of these large InSAR datasets, it is fundamental to develop automatic analysis tools, such as machine learning algorithms, which require an InSAR-derived deformation database for training and improvement.

Here we present the preliminary InSAR-derived deformation database developed in the framework of the SARAI project, which incorporates the previous InSAR results of the IGME-InSARlab and CTTC teams in Spain. The database contains classified points of measurement with the associated InSAR deformation and a set of environmental variables potentially correlated with the deformation phenomena, such as geology/lithology, land-surface slope, land cover, meteorological data, population density, and inventories such as the mining registry, the groundwater database, and the IGME’s land movements database (MOVES). We discuss the main strategies used to identify and classify pixels and areas that are moving, the covariables used and some ideas to improve the database in the future. This work has been developed in the framework of project PID2020-116540RB-C22 funded by MCIN/ AEI /10.13039/501100011033 and e-Shape project, with funding from the European Union’s Horizon 2020 research and innovation program under grant agreement 820852.

How to cite: Béjar-Pizarro, M., Ezquerro, P., Guardiola-Albert, C., Aguilera Alonso, H., Sanabria Pabón, M. P., Monserrat, O., Barra, A., Reyes-Carmona, C., Mateos, R. M., García López Davalillo, J. C., López Vinielles, J., Bru, G., Sarro, R., Galve, J. P., Tomás, R., Rodríguez Gómez, V., Mulas de la Peña, J., and Herrera, G.: Building an InSAR-based database to support geohazard risk management by exploiting large ground deformation datasets, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7308, https://doi.org/10.5194/egusphere-egu22-7308, 2022.

EGU22-7313 | Presentations | ITS2.5/NH10.8

The potential of automated snow avalanche detection from SAR images for the Austrian Alpine region using a learning-based approach 

Kathrin Lisa Kapper, Stefan Muckenhuber, Thomas Goelles, Andreas Trügler, Muhamed Kuric, Jakob Abermann, Jakob Grahn, Eirik Malnes, and Wolfgang Schöner

Each year, snow avalanches cause many casualties and tremendous damage to infrastructure. Prevention and mitigation mechanisms for avalanches are established for specific regions only. However, the full extent of the overall avalanche activity is usually barely known as avalanches occur in remote areas making in-situ observations scarce. To overcome these challenges, an automated avalanche detection approach using the Copernicus Sentinel-1 synthetic aperture radar (SAR) data has recently been introduced for some test regions in Norway. This automated detection approach from SAR images is faster and gives more comprehensive results than field-based detection provided by avalanche experts. The Sentinel-1 programme has provided - and continues to provide - free of charge, weather-independent, and high-resolution satellite Earth observations since its start in 2014. Recent advances in avalanche detection use deep learning algorithms to improve the detection rates. Consequently, the performance potential and the availability of reliable training data make learning-based approaches an appealing option for avalanche detection.  

In the framework of the exploratory project SnowAV_AT, we intend to build the basis for a state-of-the-art automated avalanche detection system for the Austrian Alps, including a "best practice" data processing pipeline and a learning-based approach applied to Sentinel-1 SAR images. As a first step towards this goal, we have compiled several labelled training datasets of previously detected avalanches that can be used for learning. Concretely, these datasets contain 19000 avalanches that occurred during a large event in Switzerland in January 2018, around 6000 avalanches that occurred in Switzerland in January 2019, and around 800 avalanches that occurred in Greenland in April 2016. The avalanche detection performance of our learning-based approach will be quantitatively evaluated against held-out test sets. Furthermore, we will provide qualitative evaluations using SAR images of the Austrian Alps to gauge how well our approach generalizes to unseen data that is potentially distributed differently from the training data. In addition, selected ground truth data from Switzerland, Greenland and Austria will allow us to validate the accuracy of the detection approach. As a particular novelty of our work, we will try to leverage high-resolution weather data and combine it with SAR images to improve the detection performance. Moreover, we will assess the possibilities of learning-based approaches in the context of the arguably more challenging avalanche forecasting problem.

How to cite: Kapper, K. L., Muckenhuber, S., Goelles, T., Trügler, A., Kuric, M., Abermann, J., Grahn, J., Malnes, E., and Schöner, W.: The potential of automated snow avalanche detection from SAR images for the Austrian Alpine region using a learning-based approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7313, https://doi.org/10.5194/egusphere-egu22-7313, 2022.

Flood events cause substantial damage to infrastructure and disrupt livelihoods. There is a need for an innovative, open-access and real-time disaster map pipeline that is automatically initiated at the time of a flood event to highlight flooded regions, potential damage and vulnerable communities. This can help direct resources appropriately during and after a disaster to reduce disaster risk. To implement this pipeline, we explored the integration of three heterogeneous data sources, namely remote sensing data, social sensing data and geospatial sensing data, to guide disaster relief and response. Remote sensing through satellite imagery is an effective method to identify flooded areas; we utilized existing deep learning models to develop a pipeline that processes both optical and radar imagery. Whilst this can offer situational awareness right after a disaster, satellite-based flood extent maps lack important contextual information about the severity of structural damage or the urgent needs of affected populations. This is where the potential of social sensing through microblogging sites comes into play, as it provides insights directly from eyewitnesses and affected people in real time. Whilst social sensing data is advantageous, these streams are usually extremely noisy, so there is a need to build disaster-relevant taxonomies for both text and images. To develop a disaster taxonomy for social media texts, we conducted a literature review to better understand stakeholder information needs. The final taxonomy consisted of 30 categories organized among three high-level classes. This taxonomy was then used to label a large number of tweet texts (~10,000) to train machine learning classifiers so that only relevant social media texts are visualized on the disaster map. Moreover, a disaster object taxonomy for social media images was developed in collaboration with a certified emergency manager and trained volunteers from the Montgomery County, MD Community Emergency Response Team. In total, 106 object categories were identified and organized as a hierarchical taxonomy with three high-level classes and 10 sub-classes. This taxonomy will be used to label a large set of disaster images for object detection so that machine learning classifiers can be trained to effectively detect disaster-relevant objects in social media imagery. The wide perspective provided by the satellite view, combined with the ground-level perspective from locally collected textual and visual information, helped us identify three types of signals: (i) confirmatory signals from both sources, which give greater confidence that a specific region is flooded, (ii) complementary signals that provide different contextual information including needs and requests, disaster impact or damage reports and situational information, and (iii) novel signals, when both data sources do not overlap and provide unique information. We plan to fuse the third component, geospatial sensing, to perform flood vulnerability analysis to allow easy identification of areas/zones that are most vulnerable to flooding. Thus, the fusion of remote sensing, social sensing and geospatial sensing for rapid flood mapping can be a powerful tool for crisis responders.

How to cite: Ofli, F., Akhtar, Z., Sadiq, R., and Imran, M.: Triangulation of remote sensing, social sensing, and geospatial sensing for flood mapping, damage estimation, and vulnerability assessment, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7561, https://doi.org/10.5194/egusphere-egu22-7561, 2022.

EGU22-7711 | Presentations | ITS2.5/NH10.8

Global sensitivity analyses to characterize the risk of earth fissures in subsiding basins 

Yueting Li, Claudia Zoccarato, Noemi Friedman, András Benczúr, and Pietro Teatini

Earth fissures associated with groundwater pumping are a severe geohazard jeopardizing several subsiding basins, generally in arid countries (e.g., Mexico, Arizona, Iran, China, Pakistan). Fissures up to 15 km long, 1–2 m wide, 15–20 m deep, and with more than 2 m of vertical dislocation have been reported. A common geological condition favoring the occurrence of earth fissures is the presence of a shallow bedrock ridge buried by compacting sedimentary deposits. This study aims to improve the understanding of this mechanism by evaluating the effects of various factors on the risk of fissure formation and development. Several parameters playing a role in fissure occurrence have been considered, such as the shape of the bedrock ridge, the aquifer thickness, the pressure depletion in the aquifer system, and its compressibility. A realistic case is developed in which fissure characteristics, such as displacements and stresses, are quantified with the aid of a numerical approach based on finite elements for the continuum and interface elements for the discretization of the fissures. Modelling results show that the presence of a bedrock ridge causes tension to accumulate around its tip and results in fissure opening from the land surface downward after long-term piezometric depletion. Different global sensitivity analysis methods are applied to measure the importance of each single factor (or group of factors) on the quantity of interest, i.e., the fissure opening. A conventional variance-based method is first presented, with Sobol indices computed from Monte Carlo simulations, although its accuracy is only guaranteed with a high number of forward simulations. As alternatives, generalized polynomial chaos expansion and gradient boosting trees are introduced to approximate the forward model and carry out the corresponding sensitivity assessment at a significantly reduced computational cost. All the measures provide similar results that highlight the importance of the bedrock ridge in earth fissuring. Generally, the steeper the bedrock ridge, the higher the risk of significant fissure opening. Pore pressure depletion is a secondary key factor that is essential for fissure formation.
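
A hedged Python sketch of the variance-based step is given below, using the SALib package and a cheap analytic stand-in for the finite-element model; the factor names, bounds and the response function are purely illustrative:

# Hypothetical sketch of a Sobol sensitivity analysis over fissure-related factors.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 4,
    "names": ["ridge_slope", "aquifer_thickness", "pressure_depletion", "compressibility"],
    "bounds": [[10, 80], [50, 300], [0.5, 3.0], [1e-9, 1e-7]],
}

def fissure_opening(x):             # placeholder for the forward finite-element simulation
    slope, thick, dp, cm = x.T
    return (slope / 80) ** 2 * dp * cm * 1e8 + 0.01 * thick / 300

X = saltelli.sample(problem, 1024)                  # Monte Carlo design
Y = fissure_opening(X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 2))))   # first-order indices
print(dict(zip(problem["names"], np.round(Si["ST"], 2))))   # total-order indices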

How to cite: Li, Y., Zoccarato, C., Friedman, N., Benczúr, A., and Teatini, P.: Global sensitivity analyses to characterize the risk of earth fissures in subsiding basins, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7711, https://doi.org/10.5194/egusphere-egu22-7711, 2022.

Induced subsidence and seismicity caused by the production of hydrocarbons in the Groningen gas field (the Netherlands) are a widely known issue facing this naturally aseismic region (Smith et al., 2019). Extraction reduces the pore-fluid pressure, leading to the accumulation of small elastic and inelastic strains and an increase in effective vertical stress that drives compaction of the reservoir sandstones.

Recent studies (Pijnenburg et al., 2019a, b; Verberne et al., 2021) identify grain-scale deformation of intergranular and grain-coating clays as largely responsible for accommodating (permanent) inelastic deformation at the small strains relevant to production (≤1.0%). However, their distribution, microstructure, abundance, and contribution to inelastic deformation remain unconstrained, presenting challenges when evaluating grain-scale deformation mechanisms within a natural system. Traditional methods of mineral identification are costly, labor-intensive, and time-consuming. Digital imaging coupled with machine-learning-driven segmentation is necessary to accelerate the identification of clay microstructures and distributions within reservoir sandstones for later large-scale analysis and geomechanical modeling.

We performed digital imaging on thin-sections taken from core recovered from the highly-depleted Zeerijp ZRP-3a well located at the most seismogenic part of the field. The core was kindly made available by the field operator, NAM. Optical digital images were acquired using the Zeiss AxioScan optical light microscope at 10x magnification with a resolution of 0.44µm and compared to backscattered electron (BSE) digital images from the Zeiss EVO 15 Scanning Electron Microscope (SEM) at varying magnifications with resolutions ranging from 0.09µm - 2.24 µm. Digital images were processed in ilastik, an interactive machine-learning-based toolkit for image segmentation that uses a Random Forest classifier to separate clays from a digital image (Berg et al., 2019).

Comparisons between segmented optical and BSE digital images indicate that image resolution is the main limiting factor for successful mineral identification and image segmentation, especially for clay minerals. Lower-resolution digital images obtained using optical light microscopy may be sufficient to segment larger intergranular/pore-filling clays, but higher-resolution BSE images are necessary to segment smaller micron- to submicron-sized grain-coating clays. Comparing the same segmented optical image (~11.5% clay) versus BSE image (~16.3% clay) reveals an error of ~30%, illustrating the potential for underestimating the clay content needed for geomechanical modeling.

Our analysis shows that coupled automated electron microscopy with machine-learning-driven image segmentation has the potential to provide statistically relevant and robust information to further constrain the role of clay films on the compaction behavior of reservoir rocks.
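
The pixel-classification idea behind this workflow can be sketched in Python as follows (a hedged, ilastik-like illustration using scikit-learn rather than ilastik itself; the image, features and sparse labels are synthetic):

# Hypothetical sketch: a Random Forest trained on per-pixel features from sparse labels.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
img = rng.random((256, 256))                                   # stand-in BSE image

features = np.stack([img,
                     gaussian_filter(img, sigma=1),
                     gaussian_filter(img, sigma=4)], axis=-1).reshape(-1, 3)

labels = np.zeros(img.size, dtype=int)                         # 0 = unlabelled
labels[rng.choice(img.size, 500, replace=False)] = 1           # sparse "clay" strokes
labels[rng.choice(img.size, 500, replace=False)] = 2           # sparse "non-clay" strokes

mask = labels > 0
clf = RandomForestClassifier(n_estimators=100).fit(features[mask], labels[mask])
clay_mask = clf.predict(features).reshape(img.shape) == 1      # segmented clay pixels
print(f"clay fraction: {clay_mask.mean():.1%}")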

 

References:

Berg, S. et al., Nat Methods 16, 1226–1232 (2019).

(NAM) Nederlandse Aardolie Maatschappij BV (2015).

Pijnenburg, R. P. J. et al., Journal of Geophysical Research: Solid Earth, 124 (2019a).

Pijnenburg, R. P. J. et al., Journal of Geophysical Research: Solid Earth, 124, 5254–5282. (2019b)

Smith, J. D. et al., Journal of Geophysical Research: Solid Earth, 124, 6165–6178. (2019)

Verberne, B. A. et al., Geology, 49 (5): 483–487. (2021)

How to cite: Vogel, H., Amiri, H., Plümper, O., Hangx, S., and Drury, M.: Applications of digital imaging coupled with machine-learning for aiding the identification, analysis, and quantification of intergranular and grain-coating clays within reservoirs rocks., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7915, https://doi.org/10.5194/egusphere-egu22-7915, 2022.

EGU22-9406 | Presentations | ITS2.5/NH10.8

Building exposure datasets using street-level imagery and deep learning object detection models 

Luigi Cesarini, Rui Figueiredo, Xavier Romão, and Mario Martina

The built environment is constantly under threat from natural hazards, and climate change will only exacerbate such perils. The assessment of natural hazard risk requires exposure models representing the characteristics of the assets at risk, which are crucial for subsequently estimating the damage and impacts of a given hazard to such assets. Studies addressing exposure assessment are expanding, in particular due to technological progress. In fact, several works are introducing data from volunteered geographic information (VGI), user-generated content, and remote sensing. Although these methods generate large amounts of data, they typically require a time-consuming extraction of the necessary information. Deep learning models are particularly well suited to perform this labour-intensive task due to their ability to handle massive amounts of data.

In this context, this work proposes a methodology that connects VGI obtained from OpenStreetMap (OSM), street-level imagery from Google Street View (GSV) and deep learning object detection models to create an exposure dataset of electrical transmission towers, an asset particularly vulnerable to strong winds among other perils (i.e., ice loads and earthquakes). The main objective of the study is to establish and demonstrate a complete pipeline that first obtains the locations of transmission towers from the power grid layer of OSM’s world infrastructure and subsequently assigns relevant features to each tower based on the classification returned by an object detection model applied to street-level imagery of the tower obtained from GSV.

The study area for the initial application of the methodology is the Porto district (Portugal), which has an area of around 1360 km² and 5789 transmission towers. The area was found to be representative given its diverse land use, containing both densely populated settlements and rural areas, and the different types of towers that can be found there. A single-stage detector (YOLOv5) and a two-stage detector (Detectron2) were trained and used to perform identification and classification of towers. The first task tests the ability of a model to recognize whether a tower is present in an image, while the second task assigns a category to each tower based on a taxonomy derived from a compilation of the most common tower types. Preliminary results on the test partition of the dataset are promising. For the identification task, YOLOv5 returned a mean average precision (mAP) of 87% for an intersection over union (IoU) of 50%, while Detectron2 reached a mAP of 91% for the same IoU. In the classification problem, the performances were also satisfactory, particularly when the models were trained on a sufficient number of images per class.
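
The inference stage of such a pipeline can be sketched in Python as follows (a hedged example: the torch.hub weights are a generic stand-in for custom tower-trained weights, and the input image is a placeholder for a GSV snapshot):

# Hypothetical sketch: running a YOLOv5 detector on a street-level image of a tower location.
import numpy as np
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")   # in practice, custom tower weights
model.conf = 0.25                                          # detection confidence threshold

img = np.zeros((640, 640, 3), dtype=np.uint8)              # placeholder for a GSV snapshot
results = model(img)

# One row per detection: bounding box, confidence and predicted class, which can then
# be written back as attributes of the corresponding OSM tower point.
print(results.pandas().xyxy[0][["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])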

Additional analyses of the results can provide insights into the types of areas for which the methodology is more reliable. For example, in remote areas, the long distance of a tower from the street might prevent the object from being identified in the image. Nevertheless, the proposed methodology can in principle be used to generate exposure models of transmission towers at large spatial scales in areas for which the necessary datasets are available.

 

How to cite: Cesarini, L., Figueiredo, R., Romão, X., and Martina, M.: Building exposure datasets using street-level imagery and deep learning object detection models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9406, https://doi.org/10.5194/egusphere-egu22-9406, 2022.

EGU22-10276 | Presentations | ITS2.5/NH10.8

Weather and climate in the AI-supported early warning system DAKI-FWS 

Elena Xoplaki, Andrea Toreti, Florian Ellsäßer, Muralidhar Adakudlu, Eva Hartmann, Niklas Luther, Johannes Damster, Kim Giebenhain, Andrej Ceglar, and Jackie Ma

The project DAKI-FWS (BMWi joint-project “Data and AI-supported early warning system to stabilise the German economy”; German: “Daten- und KI-gestütztes Frühwarnsystem zur Stabilisierung der deutschen Wirtschaft”) develops an early warning system (EWS) to strengthen economic resilience in Germany. The EWS enables better characterization of the development and course of pandemics or hazardous climate extreme events and can thus protect and support lives, jobs, land and infrastructures.

The weather and climate modules of DAKI-FWS use state-of-the-art seasonal forecasts for Germany and apply innovative AI approaches to prepare very-high-spatial-resolution simulations. These are used for the climate-related practical applications of the project, such as pandemics or subtropical/tropical diseases, and contribute to the estimation of the outbreak and evolution of health crises. Further, the weather modules of the EWS objectively identify weather and climate extremes, such as heat waves, storms and droughts, as well as compound extremes, from a large pool of key datasets. The innovative project work is complemented by the development and AI enhancement of the European Flood Awareness System model LISFLOOD and a forecasting system for Germany at very high spatial resolution. The model, combined with the high-end output of the seasonal forecasts, provides high-resolution, accurate flood risk assessment. The final output of the EWS and the hazard maps not only supports adaptation but also increases preparedness by providing a time horizon of several months ahead, thus increasing the resilience of economic sectors to the impacts of ongoing anthropogenic climate change. The weather and climate modules of the EWS provide economic, political, and administrative decision-makers and the general public with evidence on the probability of occurrence, intensity, and spatial and temporal extent of extreme events, as well as with critical information during a disaster.

How to cite: Xoplaki, E., Toreti, A., Ellsäßer, F., Adakudlu, M., Hartmann, E., Luther, N., Damster, J., Giebenhain, K., Ceglar, A., and Ma, J.: Weather and climate in the AI-supported early warning system DAKI-FWS, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10276, https://doi.org/10.5194/egusphere-egu22-10276, 2022.

Landslide inventories are essential for landslide susceptibility mapping, hazard modelling, and further risk mitigation management. For decades, experts and organisations worldwide have preferred manual visual interpretation of satellite and aerial images. However, there are various problems associated with manual inventories, such as the manual extraction of landslide borders and their representation with polygons, which is a subjective process. Manual delineation is affected by the applied methodology, the preferences of the experts and interpreters, and how much time and effort are invested in the inventory-generating process. In recent years, a vast amount of research related to semi-automated and automatic mapping of landslide inventories has been carried out to overcome these issues. The automatic generation of landslide inventories using Artificial Intelligence (AI) techniques is still in its early phase, as currently there is no published research that can create a ground-truth representation of the landslide situation after a triggering event. The evaluation metrics in the recent literature show F1-scores in the range of 50-80% for landslide boundary delineation using AI-based models. Very few studies claim to have achieved an F1-score above 80%, with the exception of those that evaluate their models in the same study area used for training. Therefore, there is still a research gap between the generation of AI-based landslide inventories and their usability for landslide hazard and risk studies. In this study, we explore several inventories developed by AI and by manual delineation and test their usability for assessing landslide hazard.

How to cite: Meena, S. R., Floris, M., and Catani, F.: Can landslide inventories developed by artificial intelligence substitute manually delineated inventories for landslide hazard and risk studies?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11422, https://doi.org/10.5194/egusphere-egu22-11422, 2022.

EGU22-11787 | Presentations | ITS2.5/NH10.8

Explainable deep learning for wildfire danger estimation 

Michele Ronco, Ioannis Prapas, Spyros Kondylatos, Ioannis Papoutsis, Gustau Camps-Valls, Miguel-Ángel Fernández-Torres, Maria Piles Guillem, and Nuno Carvalhais

Deep learning models have been remarkably successful in a number of different fields, yet their application to disaster management is obstructed by the lack of transparency and trust which characterises artificial neural networks. This is particularly relevant in the field of Earth sciences, where fitting is only a tiny part of the problem and process understanding becomes more important [1,2]. In this regard, numerous eXplainable Artificial Intelligence (XAI) algorithms have been proposed in the literature over the past few years [3]. We suggest that combining saliency maps with interpretable approximations, such as LIME, is useful to extract complementary insights and reach robust explanations. We address the problem of wildfire forecasting, for which interpreting the model's predictions is of crucial importance to put effective mitigation strategies into action. Daily risk maps have been obtained by training a convolutional LSTM with ten years of data of spatio-temporal features, including weather variables, remote sensing indices and static layers for land characteristics [4]. We show how the usage of XAI allows us to interpret the predicted fire danger, thereby narrowing the gap between black-box approaches and disaster management.
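
As a rough illustration of the saliency-map side of such an analysis (the LIME part would rely on a separate interpretable-approximation step), a minimal Python sketch of vanilla gradient saliency for a generic danger classifier might look as follows; the toy network, channel count and patch size are assumptions, not the authors' ConvLSTM.

# Minimal sketch, not the authors' code: vanilla gradient saliency for a
# generic fire-danger classifier; the architecture and input shape are invented.
import torch
import torch.nn as nn

model = nn.Sequential(                     # stand-in for the trained spatio-temporal model
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)
model.eval()

x = torch.randn(1, 8, 64, 64, requires_grad=True)   # 8 input features on a 64x64 patch
danger = torch.sigmoid(model(x))                     # predicted fire danger probability
danger.sum().backward()                              # gradients of the output w.r.t. the input

saliency = x.grad.abs().max(dim=1)[0]                # per-pixel importance map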

 

[1] Camps-Valls, G., Tuia, D., Zhu, X. X., and Reichstein, M. (Eds.): Deep Learning for the Earth Sciences: A Comprehensive Approach to Remote Sensing, Climate Science and Geosciences, Wiley & Sons, 2021.

[2] Reichstein, M., Camps-Valls, G., Stevens, B., Denzler, J., Carvalhais, N., Jung, M., and Prabhat: Deep learning and process understanding for data-driven Earth System Science, Nature, 566, 195-204, 2019.

[3] Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., and Müller, K.-R. (Eds.): Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, LNCS, vol. 11700, Springer, 2019.

[4] Prapas, I., Kondylatos, S., Papoutsis, I., Camps-Valls, G., Ronco, M., Fernández-Torres, M.-Á., Piles Guillem, M., and Carvalhais, N.: Deep Learning Methods for Daily Wildfire Danger Forecasting, arXiv:2111.02736, 2021.

How to cite: Ronco, M., Prapas, I., Kondylatos, S., Papoutsis, I., Camps-Valls, G., Fernández-Torres, M.-Á., Piles Guillem, M., and Carvalhais, N.: Explainable deep learning for wildfire danger estimation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11787, https://doi.org/10.5194/egusphere-egu22-11787, 2022.

EGU22-11872 | Presentations | ITS2.5/NH10.8

Recent Advances in Deep Learning for Spatio-Temporal Drought Monitoring, Forecasting and Model Understanding 

María González-Calabuig, Jordi Cortés-Andrés, Miguel-Ángel Fernández-Torres, and Gustau Camps-Valls

Droughts constitute one of the costliest natural hazards and have seriously destructive effects on the ecological environment, agricultural production and socio-economic conditions. Their elusive and subjective definition, due to the complex physical, chemical and biological processes of the Earth system they involve, makes their management an arduous challenge to researchers, as well as decision and policy makers. We present here our most recent advances in machine learning models in three complementary lines of research about droughts: monitoring, forecasting and understanding. While monitoring or detection is about obtaining the time series of drought maps and discovering underlying patterns and correlations, forecasting or prediction aims to anticipate future droughts. Last but not least, understanding or explaining models by means of expert-comprehensible representations is as important as accurately addressing these tasks, especially for their deployment in real scenarios. Thanks to the emergence and success of deep learning, all of these tasks can be tackled by the design of spatio-temporal data-driven approaches built on the basis of climate variables (soil moisture, precipitation, temperature, vegetation health, etc.) and/or satellite imagery. The possibilities are endless, from the design of convolutional architectures and attention mechanisms to the use of generative models such as Normalizing Flows (NFs) or Generative Adversarial Networks (GANs), trained both in a supervised and unsupervised manner, among others. Different application examples in Europe from 2003 onwards are provided, with the aim of reflecting on the possibilities of the strategies proposed, and also of foreseeing alternatives and future lines of development. For that purpose, we make use of several variables at mesoscale (1 km) spatial and 8-day temporal resolution included in the Earth System Data Cube (ESDC) [Mahecha et al., 2020] for drought detection, while high resolution (20 m, 5 days) Sentinel-2 data cubes, extracted from the extreme summer track in EarthNet2021 [Requena-Mesa et al., 2021], are considered for forecasting.

 

References

Mahecha, M. D., Gans, F., Brandt, G., Christiansen, R., Cornell, S. E., Fomferra, N., ... & Reichstein, M. (2020). Earth system data cubes unravel global multivariate dynamics. Earth System Dynamics, 11(1), 201-234.

Requena-Mesa, C., Benson, V., Reichstein, M., Runge, J., & Denzler, J. (2021). EarthNet2021: A large-scale dataset and challenge for Earth surface forecasting as a guided video prediction task. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1132-1142).

How to cite: González-Calabuig, M., Cortés-Andrés, J., Fernández-Torres, M.-Á., and Camps-Valls, G.: Recent Advances in Deep Learning for Spatio-Temporal Drought Monitoring, Forecasting and Model Understanding, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11872, https://doi.org/10.5194/egusphere-egu22-11872, 2022.

EGU22-12432 | Presentations | ITS2.5/NH10.8

Building wildfire intelligence at the edge: bridging the gap from development to deployment 

Maria João Sousa, Alexandra Moutinho, and Miguel Almeida

The increased frequency, intensity, and severity of wildfire events in several regions across the world have highlighted disaster response infrastructure hindrances that call for enhanced intelligence-gathering pipelines. In this context, interest in the use of unmanned aerial vehicles for surveillance and active fire monitoring has been growing in recent years. However, several roadblocks challenge the implementation of these solutions due to their high autonomy requirements and energy-constrained nature. In particular, the current focus of artificial intelligence development on large models hampers the development of models suitable for deployment on board these platforms. While artificial intelligence approaches can be an enabling technology that effectively scales real-time monitoring services and optimizes emergency response resources, the design of these systems imposes (i) data requirements, (ii) computing constraints and (iii) communications limitations. Here, we propose a decentralized approach, reflecting upon these three vectors.

Data-driven artificial intelligence is central to both handle multimodal sensor data in real-time and to annotate large amounts of data collected, which are necessary to build robust safety-critical monitoring systems. Nevertheless, these two objectives have distinct implications computation-wise, because the first must happen on-board, whereas the second can leverage higher processing capabilities off-board. While autonomy of robotic platforms drives mission performance, being a key reason for the need for edge computing of onboard sensor data, the communications design is essential to mission endurance as relaying large amounts of data in real-time is unfeasible energy-wise. 

For these reasons, real-time processing and data annotation must be tackled in a complementary manner, instead of the general practice of only targeting overall accuracy improvement. To build wildfire intelligence at the edge, we propose developments on two tracks of solutions: (i) data annotation and (ii) deployment at the edge. The need for considerable effort in these two avenues stems from both having very distinct development requirements and performance evaluation metrics. On the one hand, improving data annotation capacity is essential to build high-quality databases that can provide better sources for machine learning. On the other hand, for deployment at the edge the architectures need to compromise between robustness and architectural parsimony in order to be efficient for edge processing. Whereas the first objective is driven foremost by accuracy, the second goal must emphasize timeliness.

Acknowledgments
This work was supported by FCT – Fundação para a Ciência e a Tecnologia, I.P., through IDMEC, under project Eye in the Sky, PCIF/SSI/0103/2018, and through IDMEC, under LAETA, project UIDB/50022/2020. M. J. Sousa acknowledges the support from FCT, through the Ph.D. Scholarship SFRH/BD/145559/2019, co-funded by the European Social Fund (ESF).

How to cite: Sousa, M. J., Moutinho, A., and Almeida, M.: Building wildfire intelligence at the edge: bridging the gap from development to deployment, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12432, https://doi.org/10.5194/egusphere-egu22-12432, 2022.

EGU22-20 | Presentations | ITS2.6/AS5.1

PRECISIONPOP: a multi-scale monitoring system for poplar plantations integrating field, aerial and satellite remote sensing 

Francesco Chianucci, Francesca Giannetti, Clara Tattoni, Nicola Puletti, Achille Giorcelli, Carlo Bisaglia, Elio Romano, Massimo Brambilla, Piermario Chiarabaglio, Massimo Gennaro, Giovanni d'Amico, Saverio Francini, Walter Mattioli, Domenico Coaloa, Piermaria Corona, and Gherardo Chirici

Poplar (Populus spp.) plantations are globally widespread in the Northern Hemisphere, and provide a wide range of benefits and products, including timber, carbon sequestration and phytoremediation. Because of poplar's specific features (fast growth, short rotation), information needs require frequent updates that exceed the traditional scope of National Forest Inventories, implying the need for ad hoc monitoring solutions.

Here we present a regional-level multi-scale monitoring system for poplar plantations, based on the integration of different remotely sensed information at different spatial scales, developed in the Lombardy region (Northern Italy). The system is based on three levels of information: 1) At plot scale, terrestrial laser scanning (TLS) was used to develop non-destructive tree stem volume allometries in calibration sites; the produced allometries were then used to estimate plot-level stand parameters from field inventory; additional canopy structure attributes were derived using field digital cover photography. 2) At farm level, unmanned aerial vehicles (UAVs) equipped with multispectral sensors were used to upscale results obtained from field data. 3) Finally, both field and unmanned aerial estimates were used to calibrate a regional-scale supervised continuous monitoring system based on multispectral Sentinel-2 imagery, which was implemented and updated in a Google Earth Engine platform.

The combined use of multi-scale information allowed effective management and monitoring of poplar plantations. From a top-down perspective, the continuous satellite monitoring system allowed the early detection of poplar stress, providing warnings suitable for variable-rate irrigation and fertilization scheduling. From a bottom-up perspective, the spatially explicit nature of TLS measurements allows better integration with remotely sensed data, enabling a multiscale assessment of poplar plantation structure with different levels of detail, enhancing conventional tree inventories, and supporting effective management strategies. Finally, the use of UAVs is key in poplar plantations, as their spatial resolution is suited to calibrating metrics from coarser remotely sensed products, reducing or avoiding the need for ground measurements, with a significant reduction of time and costs.

How to cite: Chianucci, F., Giannetti, F., Tattoni, C., Puletti, N., Giorcelli, A., Bisaglia, C., Romano, E., Brambilla, M., Chiarabaglio, P., Gennaro, M., d'Amico, G., Francini, S., Mattioli, W., Coaloa, D., Corona, P., and Chirici, G.: PRECISIONPOP: a multi-scale monitoring system for poplar plantations integrating field, aerial and satellite remote sensing, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-20, https://doi.org/10.5194/egusphere-egu22-20, 2022.

EGU22-124 | Presentations | ITS2.6/AS5.1

Unsupervised machine learning driven Prospectivity analysis of REEs in NE India 

Malcolm Aranha and Alok Porwal

Traditional mineral prospectivity modelling for mineral exploration and targeting relies heavily on manual data filtering and processing to extract desirable geologic features based on expert knowledge. It involves the integration of geological predictor maps that are manually derived by time-consuming and labour-intensive pre-processing of primary geoscientific data to serve as spatial proxies of mineralisation processes. Moreover, the selection of these spatial proxies is guided by conceptual genetic modelling of the targeted deposit type, which may be biased by the subjective preference of an expert geologist. This study applies Self-Organising Maps (SOM), a neural network-based unsupervised machine learning clustering algorithm, to gridded geophysical and topographical datasets in order to identify and delineate regional-scale exploration targets for carbonatite-alkaline-complex-related REE deposits in northeast India. The study did not utilise interpreted and processed or manually generated data, such as surface or bed-rock geological maps, fault traces, etc., and relies on the algorithm to identify crucial features and delineate prospective areas. The obtained results were then compared with those obtained from a previous supervised knowledge-driven prospectivity analysis. The results were found to be comparable. Therefore, unsupervised machine learning algorithms are reliable tools to automate the manual process of mineral prospectivity modelling and are robust, time-saving alternatives to knowledge-driven or supervised data-driven prospectivity modelling. These methods would be instrumental in unexplored terrains for which there is little or no geological knowledge available. 
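
As a purely illustrative sketch of the unsupervised step (not the authors' workflow or data), a self-organising map can be fitted to stacked gridded layers with the MiniSom package and each grid cell assigned to its best-matching node; the layer count, map size and synthetic data below are assumptions.

# Minimal sketch with synthetic data; MiniSom stands in for whichever SOM
# implementation was actually used.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))              # 5000 grid cells x 6 geophysical layers
X = (X - X.mean(axis=0)) / X.std(axis=0)    # standardise each layer

som = MiniSom(4, 4, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 10000)                  # unsupervised training

# each cell's best-matching unit defines its cluster; clusters can then be
# inspected and ranked for prospectivity
clusters = np.array([som.winner(x) for x in X])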

How to cite: Aranha, M. and Porwal, A.: Unsupervised machine learning driven Prospectivity analysis of REEs in NE India, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-124, https://doi.org/10.5194/egusphere-egu22-124, 2022.

EGU22-654 | Presentations | ITS2.6/AS5.1

On the derivation of data-driven models for partially observed systems 

Said Ouala, Bertrand Chapron, Fabrice Collard, Lucile Gaultier, and Ronan Fablet

When considering the modeling of dynamical systems, the increasing interest in machine learning, artificial intelligence and more generally, data-driven representations, as well as the increasing availability of data, motivated the exploration and definition of new identification techniques. These new data-driven representations aim at solving modern questions regarding the modeling, the prediction and ultimately, the understanding of complex systems such as the ocean, the atmosphere and the climate. 

In this work, we focus on one question regarding the ability to define a (deterministic) dynamical model from a sequence of observations. We focus on sea surface observations and show that these observations typically relate to some, but not all, components of the underlying state space, making the derivation of a deterministic model in the observation space impossible. In this context, we formulate the identification problem as the definition, from data, of an embedding of the observations, parameterized by a differential equation. When compared to state-of-the-art techniques based on delay embedding and linear decomposition of the underlying operators, the proposed approach benefits from all the advances in machine learning and dynamical systems theory in order to define, constrain and tune the reconstructed sate space and the approximate differential equation. Furthermore, the proposed embedding methodology naturally extends to cases in which a dynamical prior (derived for example using physical principals) is known, leading to relevant physics informed data-driven models. 

How to cite: Ouala, S., Chapron, B., Collard, F., Gaultier, L., and Fablet, R.: On the derivation of data-driven models for partially observed systems, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-654, https://doi.org/10.5194/egusphere-egu22-654, 2022.

EGU22-1255 | Presentations | ITS2.6/AS5.1

A Deep Learning approach to de-bias Air Quality forecasts, using heterogeneous Open Data sources as reference 

Antonio Pérez, Mario Santa Cruz, Johannes Flemming, and Miha Razinger

The degradation of air quality is a challenge that policy-makers face all over the world. According to the World Health Organisation, air pollution causes an estimate of 7 million premature deaths every year. In this context, air quality forecasts are crucial tools for decision- and policy-makers, to achieve data-informed decisions.

Global forecasts, such as the Copernicus Atmosphere monitoring service model (CAMS), usually exhibit biases: systematic deviations from observations. Adjusting these biases is typically the first step towards obtaining actionable air quality forecasts. It is especially relevant in health-related decisions, when the metrics of interest depend on specific thresholds.

AQ (Air quality) - Bias correction was a project funded by the ECMWF Summer of Weather Code (ESOWC) 2021 whose aim is to improve CAMS model forecasts for air quality variables (NO2, O3, PM2.5), using as a reference the in-situ observations provided by OpenAQ. The adjustment, based on machine learning methods, was performed over a set of specific interesting locations provided by the ECMWF, for the period June 2019 to March 2021.

The machine learning approach uses three different deep-learning-based models and an extra neural network that gathers the output of the three previous models. Two of the three DL-based models are independent and follow the same structure built upon the InceptionTime module: they use both meteorological and air quality variables to exploit the temporal variability and to extract the most meaningful features of the past [t-24h, t-23h, … t-1h] and future [t, t+1h, …, t+23h] CAMS predictions. The third model processes the static station attributes (longitude, latitude and elevation) with a multilayer perceptron. The features extracted by these three models are fed into another multilayer perceptron to predict the upcoming errors at hourly resolution [t, t+1h, …, t+23h]. As a final step, five different initializations are considered and ensembled with equal weights to obtain a more stable regressor.
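
A minimal Python sketch of this kind of three-branch architecture follows; it is an illustration only, with plain convolutional blocks standing in for the InceptionTime modules and with invented layer sizes and variable counts.

# Minimal sketch, not the project code: two temporal branches (past and future
# CAMS windows), one static-attribute branch, and a final MLP that predicts the
# next 24 hourly errors.
import torch
import torch.nn as nn

class TemporalBranch(nn.Module):
    def __init__(self, n_vars, n_out=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_vars, 32, 3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_out),
        )
    def forward(self, x):                    # x: (batch, n_vars, 24 hours)
        return self.net(x)

past_branch   = TemporalBranch(n_vars=10)    # [t-24h .. t-1h]
future_branch = TemporalBranch(n_vars=10)    # [t .. t+23h]
static_branch = nn.Sequential(nn.Linear(3, 16), nn.ReLU())   # lon, lat, elevation

head = nn.Sequential(nn.Linear(32 + 32 + 16, 64), nn.ReLU(), nn.Linear(64, 24))

past, future, static = torch.randn(8, 10, 24), torch.randn(8, 10, 24), torch.randn(8, 3)
features = torch.cat([past_branch(past), future_branch(future), static_branch(static)], dim=1)
hourly_error_prediction = head(features)     # (8, 24): corrections for t .. t+23h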

Prior to the correction, CAMS forecasts of air quality variables were biased regardless of the location of interest and the variable (on average: biasNO2 = -22.76, biasO3 = 44.30, biasPM2.5 = 12.70). In addition, the skill of the model, measured by the Pearson correlation, did not reach 0.5 for any of the variables, with remarkably low values for NO2 and O3 (on average: pearsonNO2 = 0.10, pearsonO3 = 0.14).

The AQ-BiasCorrection modelling properly corrects these biases. Overall, the numbers of stations whose biases improve in both the train and test sets are 52 out of 61 (85%) for NO2, 62 out of 67 (92%) for O3, and 80 out of 102 (78%) for PM2.5. Furthermore, the bias improves with declines of -1.1%, -9.7% and -13.9% for NO2, O3 and PM2.5 respectively. In addition, there is an increase in the model skill measured through the Pearson correlation, with overall improvements of the variable skill in the range of 100-400%.

How to cite: Pérez, A., Santa Cruz, M., Flemming, J., and Razinger, M.: A Deep Learning approach to de-bias Air Quality forecasts, using heterogeneous Open Data sources as reference, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1255, https://doi.org/10.5194/egusphere-egu22-1255, 2022.

EGU22-1992 | Presentations | ITS2.6/AS5.1

Approximating downward short-wave radiation flux using all-sky optical imagery using machine learning trained on DASIO dataset. 

Vasilisa Koshkina, Mikhail Krinitskiy, Nikita Anikin, Mikhail Borisov, Natalia Stepanova, and Alexander Osadchiev

Solar radiation is the main source of energy on Earth. Cloud cover is the main physical factor limiting the downward short-wave radiation flux. In modern climate and weather forecast models, physical schemes describing the passage of radiation through clouds may be used. This is a computationally extremely expensive option for estimating downward radiation fluxes. Instead, one may use parameterizations, which are simplified schemes for approximating environmental variables. The purpose of this work is to improve the accuracy of existing parameterizations of downward shortwave radiation fluxes. We approach the problem with various machine learning (ML) models that approximate the downward shortwave radiation flux from all-sky optical imagery, assuming that an all-sky photo contains complete information about the downward shortwave radiation. We examine several types of ML models trained on a dataset of all-sky imagery accompanied by short-wave radiation flux measurements. The Dataset of All-Sky Imagery over the Ocean (DASIO) was collected in the Indian, Atlantic and Arctic oceans during several oceanic expeditions from 2014 to 2021. The quality of the best classic ML model is better than that of existing parameterizations known from the literature. We will show the results of our study regarding classic ML models as well as the results of an end-to-end ML approach involving convolutional neural networks. Our results suggest that one may retrieve downward shortwave radiation fluxes directly from all-sky imagery. We will also cover some downsides and limitations of the presented approach.

How to cite: Koshkina, V., Krinitskiy, M., Anikin, N., Borisov, M., Stepanova, N., and Osadchiev, A.: Approximating downward short-wave radiation flux using all-sky optical imagery using machine learning trained on DASIO dataset., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1992, https://doi.org/10.5194/egusphere-egu22-1992, 2022.

EGU22-2058 | Presentations | ITS2.6/AS5.1

Deep learning for ensemble forecasting 

Rüdiger Brecht and Alexander Bihlo

Ensemble prediction systems are an invaluable tool for weather prediction. Practically, ensemble predictions are obtained by running several perturbed numerical simulations. However, these systems are associated with a high computational cost and often involve statistical post-processing steps to improve their quality.
Here we propose to use a deep-learning-based algorithm to learn the statistical properties of a given ensemble prediction system, such that this system will not be needed to simulate future ensemble forecasts. This way, the high computational costs of the ensemble prediction system can be avoided while still obtaining the statistical properties from a single deterministic forecast. We show preliminary results where we demonstrate the ensemble prediction properties for a shallow water unstable jet simulation on the sphere. 

How to cite: Brecht, R. and Bihlo, A.: Deep learning for ensemble forecasting, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2058, https://doi.org/10.5194/egusphere-egu22-2058, 2022.

Numerical weather prediction (NWP) models are widely used for operational weather forecasting in meteorological centers. NWP models describe the flow of fluids by employing a set of governing equations, physical parameterization schemes and initial and boundary conditions, and thus often suffer from prediction biases due to insufficient data assimilation and the assumptions or approximations made in dynamical and physical processes. To produce gridded rainfall forecasts with high confidence, in this study we present a data-driven deep learning model for correcting NWP rainfall, which mainly includes a confidence network and a combinatorial network. A focal loss is introduced to handle the long-tailed distribution of rainfall; it is expected to alleviate the impact of the large span of rainfall magnitudes by transforming the regression problem into several binary classification problems. The deep learning model is used to correct gridded rainfall forecasts from the European Centre for Medium-Range Weather Forecasts Integrated Forecasting System global model (ECMWF-IFS), with forecast lead times of 24 h to 240 h over Eastern China. First, the rainfall forecast correction problem is treated as an image-to-image translation problem addressed with neural networks. Second, ECMWF-IFS forecasts and rainfall observations from recent years are used as training, validation, and testing datasets. Finally, the correction performance of the new model is evaluated and compared with several classical machine learning algorithms. Experiments on rainfall forecast error correction show that the new model can effectively forecast rainfall over the East China region during the flood season of 2020, and that the proposed approach generally performs better in bias correction of rainfall prediction than most classical machine learning approaches.
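
A minimal sketch of one such threshold-exceedance classification with a binary focal loss is given below; the threshold, gamma and alpha values are illustrative assumptions, not taken from the study.

# Minimal sketch: binary focal loss (Lin et al., 2017) applied to one
# rainfall-exceedance classification problem; all numbers are invented.
import torch

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    # Down-weights easy cases so that rare heavy-rain samples dominate the loss.
    p = torch.sigmoid(logits)
    pt = p * targets + (1 - p) * (1 - targets)          # probability of the true class
    w = alpha * targets + (1 - alpha) * (1 - targets)   # class-balancing weight
    return (-w * (1 - pt) ** gamma * torch.log(pt.clamp_min(1e-8))).mean()

rain_obs = torch.tensor([0.0, 2.0, 15.0, 40.0])         # observed 24 h totals (mm)
targets = (rain_obs > 10.0).float()                     # one of the binary problems
logits = torch.tensor([-2.0, -1.0, 0.5, 2.0])           # raw network outputs
loss = binary_focal_loss(logits, targets)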

How to cite: Ma, L.: A Deep Learning Bias Correction Approach for Rainfall Numerical Prediction, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2095, https://doi.org/10.5194/egusphere-egu22-2095, 2022.

EGU22-2893 | Presentations | ITS2.6/AS5.1 | Highlight

Bias Correction of Operational Storm Surge Forecasts Using Neural Networks 

Paulina Tedesco, Jean Rabault, Martin Lilleeng Sætra, Nils Melsom Kristensen, Ole Johan Aarnes, Øyvind Breivik, and Cecilie Mauritzen

Storm surges can give rise to extreme floods in coastal areas. The Norwegian Meteorological Institute (MET Norway) produces 120-hour regional operational storm surge forecasts along the coast of Norway based on the Regional Ocean Modeling System (ROMS). Despite advances in the development of models and computational capability, forecast errors remain large enough to impact response measures and issued alerts, in particular, during the strongest storm events. Reducing these errors will positively impact the efficiency of the warning systems while minimizing efforts and resources spent on mitigation.

Here, we investigate how forecasts can be improved with residual learning, i.e., training data-driven models to predict, and correct, the error in the ROMS output. For this purpose, sea surface height data from stations around Norway were collected and compared with the ROMS output.

We develop two different residual learning frameworks that can be applied on top of the ROMS output. In the first one, we perform binning of the model error, conditioned on pressure, wind, and waves. Clear error patterns are visible when the error conditioned on the wind is plotted in a polar plot for each station. These error maps can be stored as correction lookup tables to be applied to the ROMS output. However, since wind, pressure, and waves are correlated, we cannot simultaneously correct the error associated with each variable using this method. To overcome this limitation, we develop a second method, which resorts to Neural Networks (NNs) to perform nonlinear modeling of the error pattern obtained at each station.
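
The first framework amounts to a lookup table of mean errors per weather bin. A minimal sketch with synthetic numbers is given below; the bin edges, the single conditioning pair (wind direction and speed) and the data are assumptions for illustration.

# Minimal sketch, not MET Norway's code: bin the surge error by wind direction
# and speed at one station, then use the binned means as a correction table.
import numpy as np

rng = np.random.default_rng(0)
wind_dir = rng.uniform(0, 360, 5000)                  # degrees
wind_speed = rng.uniform(0, 25, 5000)                 # m/s
error = 0.02 * wind_speed * np.cos(np.deg2rad(wind_dir - 200)) + rng.normal(0, 0.03, 5000)

dir_edges = np.arange(0, 361, 30)                     # 12 direction sectors
spd_edges = np.array([0, 5, 10, 15, 20, 25])

sums, _, _ = np.histogram2d(wind_dir, wind_speed, bins=[dir_edges, spd_edges], weights=error)
counts, _, _ = np.histogram2d(wind_dir, wind_speed, bins=[dir_edges, spd_edges])
error_map = sums / np.maximum(counts, 1)              # mean error per (direction, speed) bin

# applying the lookup correction to a new forecast situation
i = np.digitize(210.0, dir_edges) - 1
j = np.digitize(12.0, spd_edges) - 1
corrected_surge = 0.85 - error_map[i, j]              # ROMS surge minus expected error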

The residual NN method strongly outperforms the error map method, and is a promising direction for correcting storm surge models operationally. Indeed, i) this method is applied on top of the existing model and requires no changes to it, ii) all predictors used for NN inference are available operationally, iii) prediction by the NN is very fast, typically a few seconds per station, and iv) the NN correction can be provided to a human expert who gets to inspect it, compare it with the ROMS output, and see how much correction is brought by the NN. Using this NN residual error correction method, the RMS error in the Oslofjord is reduced by typically 7% for lead times of 24 hours, 17% for 48 hours, and 35% for 96 hours.

How to cite: Tedesco, P., Rabault, J., Sætra, M. L., Kristensen, N. M., Aarnes, O. J., Breivik, Ø., and Mauritzen, C.: Bias Correction of Operational Storm Surge Forecasts Using Neural Networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2893, https://doi.org/10.5194/egusphere-egu22-2893, 2022.

EGU22-3977 | Presentations | ITS2.6/AS5.1 | Highlight

Learning quasi-geostrophic turbulence parametrizations from a posteriori metrics 

Hugo Frezat, Julien Le Sommer, Ronan Fablet, Guillaume Balarac, and Redouane Lguensat

Machine learning techniques are now ubiquitous in the geophysical science community. They have been applied in particular to the prediction of subgrid-scale parametrizations using data that describes small scale dynamics from large scale states. However, these models are then used to predict temporal trajectories, which is not covered by this instantaneous mapping. Following the model trajectory during training can be done using an end-to-end approach, where temporal integration is performed using a neural network. As a consequence, the approach is shown to optimize a posteriori metrics, whereas the classical instantaneous training is limited to a priori ones. When applied on a specific energy backscatter problem, found in quasi-geostrophic turbulent flows, the strategy demonstrates long-term stability and high fidelity statistical performance, without any increase in computational complexity during rollout. These improvements may question the future development of realistic subgrid-scale parametrizations in favor of differentiable solvers, required by the a posteriori strategy.
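
A toy sketch of the difference between the two training strategies is given below; the one-dimensional state, explicit Euler solver and layer sizes are assumptions, not the quasi-geostrophic setup of the study.

# Minimal sketch: an "a posteriori" loss evaluated on an n-step rollout through
# a differentiable solver, so gradients reach the learned closure via the
# integration itself rather than via single-step (a priori) targets.
import torch
import torch.nn as nn

closure = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def step(x, dt=0.01):
    resolved = -x                          # stand-in for the resolved large-scale tendency
    return x + dt * (resolved + closure(x))

def rollout_loss(x0, reference, n_steps=20):
    x, loss = x0, 0.0
    for n in range(n_steps):
        x = step(x)
        loss = loss + ((x - reference[n]) ** 2).mean()
    return loss / n_steps

x0 = torch.randn(64, 1)
reference = torch.randn(20, 64, 1)         # would come from the high-resolution model
opt = torch.optim.Adam(closure.parameters(), lr=1e-3)
loss = rollout_loss(x0, reference)
loss.backward()                            # gradients flow through all solver steps
opt.step()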

How to cite: Frezat, H., Le Sommer, J., Fablet, R., Balarac, G., and Lguensat, R.: Learning quasi-geostrophic turbulence parametrizations from a posteriori metrics, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3977, https://doi.org/10.5194/egusphere-egu22-3977, 2022.

EGU22-4062 | Presentations | ITS2.6/AS5.1

Climatological Ocean Surface Wave Projections using Deep Learning 

Peter Mlakar, Davide Bonaldo, Antonio Ricchi, Sandro Carniel, and Matjaž Ličer

We present a numerically cheap machine-learning model which accurately emulates the performances of the surface wave model Simulating WAves Near Shore (SWAN) in the Adriatic basin (north-east Mediterranean Sea).

A ResNet50 inspired deep network architecture with customized spatio-temporal attention layers was used, the network being trained on a 1970-1997 dataset of time-dependent features based on wind fields retrieved from the COSMO-CLM regional climate model (The authors acknowledge Dr. Edoardo Bucchignani (Meteorology Laboratory, Centro Italiano Ricerche Aerospaziali -CIRA-, Capua, Italy), for providing the COSMO-CLM wind fields). SWAN surface wave model outputs for the period of 1970-1997 are used as labels. The period 1998-2000 is used to cross-validate that the network very accurately reproduces SWAN surface wave features (i.e. significant wave height, mean wave period, mean wave direction) at several locations in the Adriatic basin. 

After successful cross validation, a series of projections of ocean surface wave properties based on climate model projections for the end of 21st century (under RCP 8.5 scenario) are performed, and shifts in the emulated wave field properties are discussed.

How to cite: Mlakar, P., Bonaldo, D., Ricchi, A., Carniel, S., and Ličer, M.: Climatological Ocean Surface Wave Projections using Deep Learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4062, https://doi.org/10.5194/egusphere-egu22-4062, 2022.

EGU22-4493 | Presentations | ITS2.6/AS5.1 | Highlight

Semi-automatic tuning procedure for a GCM targeting continental surfaces: a first experiment using in situ observations 

Maëlle Coulon--Decorzens, Frédérique Cheruy, and Frédéric Hourdin

The tuning or calibration of General Circulation Models (GCMs) is an essential stage for their proper behavior. The need to have the best climate projections in the regions where we live drives the need to tune the models in particular towards the land surface, bearing in mind that the interactions between the atmosphere and the land surface remain a key source of uncertainty in regional-scale climate projections [1].

For a long time, this tuning has been done by hand, based on scientific expertise, and has not been sufficiently documented [2]. Recent tuning tools offer the possibility to accelerate climate model development, providing a real tuning formalism as well as a new way to understand climate models. High Tune explorer is one of these statistical tuning tools, involving machine learning and based on uncertainty quantification. It aims to reduce the range of free parameters that allow realistic model behaviour [3]. A new automatic tuning experiment was developed with this tool for the atmospheric component of the IPSL GCM model, LMDZ. The model was first tuned at the process level, using several single-column test cases compared to large-eddy simulations, and then at the global level by targeting radiative metrics at the top of the atmosphere [4].

We propose to add a new step to this semi-automatic tuning procedure targeting atmosphere and land-surface interactions. The first aspect of the proposition is to compare coupled atmosphere-continent simulations (here running LMDZ-ORCHIDEE) with in situ observations from the SIRTA observatory located southwest of Paris. In situ observations provide hourly, jointly colocated data with a strong potential for understanding the processes at stake and their representation in the model. These data are also subject to much lower uncertainties than satellite inversions with respect to surface observations. In order to fully benefit from the site observations, the model winds are nudged toward reanalysis. This forces the simulations to follow the actual meteorological sequence, thus allowing the comparison between simulations and observations at the process time scale. The removal of the errors arising from the representation of large-scale dynamics makes the tuning focus on the representation of physical processes «at a given meteorological situation». Finally, the model grid is zoomed in on the SIRTA observatory in order to reduce the computational cost of the simulations while preserving a fine mesh around this observatory.

We show the results of this new tuning step, which succeeds in reducing the domain of acceptable free parameters as well as the dispersion of the simulations. This method, which is less computationally costly than global tuning, is therefore a good way to precondition the latter. It allows the joint tuning of atmospheric and land surface models, traditionally tuned separately [5], and has the advantage of remaining close to the processes and thus improving their understanding.

References:

[1] Cheruy et al., 2014, https://doi.org/10.1002/2014GL061145

[2] Hourdin et al., 2017, https://doi.org/10.1175/BAMS-D-15-00135.1

[3] Couvreux et al., 2021, https://doi.org/10.1029/2020MS002217

[4] Hourdin et al., 2021, https://doi.org/10.1029/2020MS002225

[5] Cheruy et al., 2020, https://doi.org/10.1029/2019MS002005

How to cite: Coulon--Decorzens, M., Cheruy, F., and Hourdin, F.: Semi-automatic tuning procedure for a GCM targeting continental surfaces: a first experiment using in situ observations, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4493, https://doi.org/10.5194/egusphere-egu22-4493, 2022.

EGU22-4923 | Presentations | ITS2.6/AS5.1

Constrained Generative Adversarial Networks for Improving Earth System Model Precipitation 

Philipp Hess, Markus Drüke, Stefan Petri, Felix Strnad, and Niklas Boers

The simulation of precipitation in numerical Earth system models (ESMs) involves various processes on a wide range of scales, requiring high temporal and spatial resolution for realistic simulations. This can lead to biases in computationally efficient ESMs that have a coarse resolution and limited model complexity. Traditionally, these biases are corrected by relating the distributions of historical simulations with observations [1]. While these methods successfully improve the modelled statistics, unrealistic spatial features that require a larger spatial context are not addressed.
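
For reference, the traditional distribution-based correction mentioned here can be as simple as empirical quantile mapping; a minimal sketch with synthetic data follows (the gamma distributions and quantile grid are illustrative assumptions).

# Minimal sketch of empirical quantile mapping between a biased model
# climatology and observations.
import numpy as np

rng = np.random.default_rng(0)
obs_hist = rng.gamma(2.0, 2.0, 10000)            # observed historical precipitation
mod_hist = rng.gamma(2.0, 1.5, 10000) + 0.5      # biased model climatology
mod_new = rng.gamma(2.0, 1.5, 1000) + 0.5        # new model output to correct

q = np.linspace(0.001, 0.999, 999)
mod_q = np.quantile(mod_hist, q)
obs_q = np.quantile(obs_hist, q)

# map each new model value onto the observed distribution via matching quantiles
corrected = np.interp(mod_new, mod_q, obs_q)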

Here we apply generative adversarial networks (GANs) [2] to transform precipitation of the CM2Mc-LPJmL ESM [3] into a bias-corrected and more realistic output. Feature attribution shows that the GAN has correctly learned to identify spatial regions with the largest bias during training. Our method presents a general bias correction framework that can be extended to a wider range of ESM variables to create highly realistic but computationally inexpensive simulations of future climates. We also discuss the generalizability of our approach to projections from CMIP6, given that the GAN is only trained on historical data.

[1] A.J. Cannon et al. "Bias correction of GCM precipitation by quantile mapping: How well do methods preserve changes in quantiles and extremes?." Journal of Climate 28.17 (2015): 6938-6959.

[2] I. Goodfellow et al. "Generative adversarial nets." Advances in neural information processing systems 27 (2014).

[3] M. Drüke et al. "CM2Mc-LPJmL v1.0: Biophysical coupling of a process-based dynamic vegetation model with managed land to a general circulation model." Geoscientific Model Development 14.6 (2021): 4117-4141.

How to cite: Hess, P., Drüke, M., Petri, S., Strnad, F., and Boers, N.: Constrained Generative Adversarial Networks for Improving Earth System Model Precipitation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4923, https://doi.org/10.5194/egusphere-egu22-4923, 2022.

EGU22-5219 | Presentations | ITS2.6/AS5.1 | Highlight

Neural Partial Differential Equations for Atmospheric Dynamics 

Maximilian Gelbrecht and Niklas Boers

When predicting complex systems such as parts of the Earth system, one typically relies on differential equations which can often be incomplete, missing unknown influences or higher order effects. Using the universal differential equations framework, we can augment the equations with artificial neural networks that can compensate these deficiencies. We show that this can be used to predict the dynamics of high-dimensional spatiotemporally chaotic partial differential equations, such as the ones describing atmospheric dynamics. In a first step towards a hybrid atmospheric model, we investigate the Marshall Molteni Quasigeostrophic Model in the form of a Neural Partial Differential Equation. We use it in synthetic examples where parts of the governing equations are replaced with artificial neural networks (ANNs) and demonstrate how the ANNs can recover those terms.
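
A toy sketch of the idea follows; the one-dimensional field, hand-written RK4 integrator and diffusion term are assumptions for illustration, not the Marshall Molteni model.

# Minimal sketch of a "universal differential equation": known physics plus a
# learned correction, integrated with a differentiable RK4 step so the network
# can be trained end-to-end against reference trajectories.
import torch
import torch.nn as nn

nn_term = nn.Sequential(nn.Conv1d(1, 16, 5, padding=2), nn.Tanh(),
                        nn.Conv1d(16, 1, 5, padding=2))

def rhs(u):
    lap = torch.roll(u, 1, -1) - 2 * u + torch.roll(u, -1, -1)   # periodic Laplacian
    physics = 0.1 * lap                                          # known (toy) physics
    return physics + nn_term(u)                                  # learned missing terms

def rk4_step(u, dt=0.01):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = torch.randn(4, 1, 128)      # batch of 1-D fields on a periodic grid
u_next = rk4_step(u)            # differentiable, so nn_term is trainable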

How to cite: Gelbrecht, M. and Boers, N.: Neural Partial Differential Equations for Atmospheric Dynamics, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5219, https://doi.org/10.5194/egusphere-egu22-5219, 2022.

EGU22-5631 | Presentations | ITS2.6/AS5.1

Autonomous Assessment of Source Area Distributions for Sections in Lagrangian Particle Release Experiments 

Carola Trahms, Patricia Handmann, Willi Rath, Matthias Renz, and Martin Visbeck

Lagrangian experiments for particle tracing in atmosphere or ocean models and their analysis are a cornerstone of earth-system studies. They cover diverse study objectives such as the identification of pathways or source regions. Data for Lagrangian studies are generated by releasing virtual particles in one or in multiple locations of interest and simulating their advective-diffusive behavior backwards or forwards in time. Identifying main pathways connecting two regions of interest is often done by counting the trajectories that reach both regions. Here, the exact source and target region must be defined manually by a researcher. Manually defining the importance and exact location of these regions introduces a highly subjective perspective into the analysis. Additionally, to investigate all major target regions, all of them must be defined manually and the data must be analyzed accordingly. This human element slows down and complicates large scale analyses with many different sections and possible source areas.

We propose to significantly reduce the manual aspect by automating this process. To this end, we combine methods from different areas of machine learning and pattern mining into a sequence of steps. First, unsupervised methods, i.e., clustering, identify possible source areas on a randomized subset of the data. In a second step, supervised learning, i.e., classification, labels the positions along the trajectories according to their most probable source area, using the automatically identified clusters as labels. The results of this approach can then be compared quantitatively to the results of analyses with manual definition of source areas and border-hitting-based labeling of the trajectories. Preliminary findings suggest that this approach could indeed help greatly to objectify and speed up the analysis process for Lagrangian Particle Release Experiments.
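
A minimal sketch of the cluster-then-classify idea on synthetic points is shown below; k-means and a random forest are stand-ins for whichever clustering and classification algorithms are actually used, and the features are invented.

# Minimal sketch: step 1 discovers candidate source areas without supervision,
# step 2 labels trajectory positions with those areas.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# step 1: unsupervised identification of source areas on a randomized subset
end_points = rng.normal(size=(2000, 2)) + rng.choice([-3.0, 0.0, 3.0], size=(2000, 1))
source_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(end_points)

# step 2: supervised labelling of positions along trajectories with those areas
positions = end_points + rng.normal(scale=0.5, size=end_points.shape)   # stand-in features
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(positions, source_labels)
predicted_source = clf.predict(positions)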

How to cite: Trahms, C., Handmann, P., Rath, W., Renz, M., and Visbeck, M.: Autonomous Assessment of Source Area Distributions for Sections in Lagrangian Particle Release Experiments, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5631, https://doi.org/10.5194/egusphere-egu22-5631, 2022.

EGU22-5632 | Presentations | ITS2.6/AS5.1

Data-Driven Sentinel-2 Based Deep Feature Extraction to Improve Insect Species Distribution Models 

Joe Phillips, Ce Zhang, Bryan Williams, and Susan Jarvis

Despite being a vital part of ecosystems, insects are dying out at unprecedented rates across the globe. To help address this in the UK, the UK Centre for Ecology & Hydrology (UKCEH) is creating a tool to utilise insect species distribution models (SDMs) for better facilitating future conservation efforts via volunteer-led insect tracking procedures. Based on these SDM models, we explored the inclusion of additional covariate information via 10-20m2 bands of temporally-aggregated Sentinel-2 data taken over the North of England in 2017 to improve the predictive performance. Here, we matched the 10-20m2 resolution of the satellite data to the coarse 100m2 insect observation data via four methodologies of increasing complexity. First, we considered standard pixel-based approaches, performing aggregation by taking both the mean and standard deviation over the 10m2 pixels. Second, we explored object-based approaches to address the modifiable areal unit problem by applying the SNIC superpixels algorithm over the extent, with the mean and standard deviation of the pixels taken within each segment. The resulting dataset was then re-projected to a resolution of 100m2 by taking the modal values of the 10m2 pixels, which were provided with the aggregated values of their parent segment. Third, we took the UKCEH-created 2017 Land Cover Map (LCM) dataset and sampled 42,000 random 100m2 areas, evenly distributed about their modal land cover classes. We trained the U-Net Deep Learning model using the Sentinel-2 satellite images and LCM classes, by which data-driven features were extracted from the network over each 100m2 extent. Finally, as with the second approach, we instead used the superpixel segments as the units of analysis, sampling 21,000 segments, and taking the smallest bounding box around each of them. An attention-based U-Net was then adopted to mask each of the segments from their background and extract deep features. In a similar fashion to the second approach, we then re-projected the resulting dataset to a resolution of 100m2, taking the modal segment values accordingly. Using cross-validated AUCs over various species of moths and butterflies, we found that the object-based deep learning approach achieved the best accuracy when used with the SDMs. As such, we conclude that the novel approach of spatially aggregating satellite data via object-based, deep feature extraction has the potential to benefit similar, model-based aggregation needs and catalyse a step-change in ecological and environmental applications in the future.
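
A minimal sketch of the object-based aggregation step is given below; SLIC from scikit-image is used as a stand-in for SNIC, and the image, band count and segment statistics are illustrative assumptions.

# Minimal sketch: segment an image into superpixels and aggregate band
# statistics per segment, as object-based covariates for the SDMs.
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)
image = rng.random((120, 120, 3))                   # stand-in for Sentinel-2 bands

segments = slic(image, n_segments=200, compactness=10, start_label=0)

n_seg = segments.max() + 1
seg_mean = np.array([image[segments == s].mean(axis=0) for s in range(n_seg)])
seg_std = np.array([image[segments == s].std(axis=0) for s in range(n_seg)])

# push segment statistics back onto the pixel grid before coarser resampling
mean_map = seg_mean[segments]                       # each pixel gets its parent segment's mean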

How to cite: Phillips, J., Zhang, C., Williams, B., and Jarvis, S.: Data-Driven Sentinel-2 Based Deep Feature Extraction to Improve Insect Species Distribution Models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5632, https://doi.org/10.5194/egusphere-egu22-5632, 2022.

EGU22-5681 | Presentations | ITS2.6/AS5.1

AtmoDist as a new pathway towards quantifying and understanding atmospheric predictability 

Sebastian Hoffmann, Yi Deng, and Christian Lessig

The predictability of the atmosphere is a classical problem that has received much attention from both a theoretical and practical point of view. In this work, we propose to use a purely data-driven method based on a neural network to revisit the problem. The analysis is built upon the recently introduced AtmoDist network that has been trained on high-resolution reanalysis data to provide a probabilistic estimate of the temporal difference between given atmospheric fields, represented by vorticity and divergence. We define the skill of the network for this task as a new measure of atmospheric predictability, hypothesizing that the prediction of the temporal differences by the network will be more susceptible to errors when the atmospheric state is intrinsically less predictable. Preliminary results show that for short timescales (3-48 hours) one sees enhanced predictability in the warm season compared to the cool season over northern midlatitudes, and lower predictability over ocean compared to land. These findings support the hypothesis that across short timescales, AtmoDist relies on the recurrences of mesoscale convection with coherent spatiotemporal structures to connect spatial evolutions to temporal differences. For example, the prevalence of mesoscale convective systems (MCSs) over the central US in boreal warm season can explain the increase of mesoscale predictability there, and oceanic zones marked by greater predictability correspond well to regions of elevated convective activity such as the Pacific ITCZ. Given the dependence of atmospheric predictability on geographic location, season, and most importantly, timescales, we further apply the method to synoptic scales (2-10 days), where excitation and propagation of large-scale disturbances such as Rossby wave packets are expected to provide the connection between temporal and spatial differences. The design of the AtmoDist network is thereby adapted to the prediction range; for example, the size of the local patches that serve as input to AtmoDist is chosen based on the spatiotemporal atmospheric scales that provide the expected time and space connections.
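
The flavour of the underlying task can be sketched as a pairwise time-lag classifier; the encoder, patch size and number of lag classes below are assumptions for illustration, not the published AtmoDist architecture.

# Minimal sketch: encode two atmospheric patches (vorticity, divergence) and
# classify the time lag separating them; classification skill then serves as a
# predictability proxy.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),   # 2 channels: vorticity and divergence
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
lag_head = nn.Linear(64, 16)                      # 16 candidate lags (e.g. 3 h .. 48 h)

patch_t0 = torch.randn(8, 2, 32, 32)
patch_t1 = torch.randn(8, 2, 32, 32)
logits = lag_head(torch.cat([encoder(patch_t0), encoder(patch_t1)], dim=1))
lag_probs = torch.softmax(logits, dim=1)          # probabilistic estimate of the lag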

By providing to the community a powerful, purely data-driven technique for quantifying, evaluating, and interpreting predictability, our work lays the foundation for efficiently detecting the existence of sub-seasonal to seasonal (S2S) predictability and, by further analyzing the mechanism of AtmoDist, understanding the physical origins, which bears major scientific and socioeconomic significances.

How to cite: Hoffmann, S., Deng, Y., and Lessig, C.: AtmoDist as a new pathway towards quantifying and understanding atmospheric predictability, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5681, https://doi.org/10.5194/egusphere-egu22-5681, 2022.

EGU22-5746 | Presentations | ITS2.6/AS5.1

Model Output Statistics (MOS) and Machine Learning applied to CAMS O3 forecasts: trade-offs between continuous and categorical skill scores 

Hervé Petetin, Dene Bowdalo, Pierre-Antoine Bretonnière, Marc Guevara, Oriol Jorba, Jan Mateu armengol, Margarida Samso Cabre, Kim Serradell, Albert Soret, and Carlos Pérez García-Pando

Air quality (AQ) forecasting systems are usually built upon physics-based numerical models that are affected by a number of uncertainty sources. In order to reduce forecast errors, first and foremost the bias, they are often coupled with Model Output Statistics (MOS) modules. MOS methods are statistical techniques used to correct raw forecasts at surface monitoring station locations, where AQ observations are available. In this study, we investigate to what extent AQ forecasts can be improved using a variety of MOS methods, including persistence (PERS), moving average (MA), quantile mapping (QM), Kalman Filter (KF), analogs (AN), and gradient boosting machine (GBM). We apply our analysis to the Copernicus Atmospheric Monitoring Service (CAMS) regional ensemble median O3 forecasts over the Iberian Peninsula during 2018–2019. A key aspect of our study is the evaluation, which is performed using a very comprehensive set of continuous and categorical metrics at various time scales (hourly to daily), along different lead times (1 to 4 days), and using different meteorological input data (forecast vs reanalyzed).

Our results show that O3 forecasts can be substantially improved using such MOS corrections and that this improvement goes much beyond the correction of the systematic bias. Although it typically affects all lead times, some MOS methods appear more adversely impacted by the lead time. When considering MOS methods relying on meteorological information and comparing the results obtained with IFS forecasts and ERA5 reanalysis, the relative deterioration brought by the use of IFS is minor, which paves the way for their use in operational MOS applications. Importantly, our results also clearly show the trade-offs between continuous and categorical skills and their dependencies on the MOS method. The most sophisticated MOS methods better reproduce O3 mixing ratios overall, with lowest errors and highest correlations. However, they are not necessarily the best in predicting the highest O3 episodes, for which simpler MOS methods can give better results. Although the complex impact of MOS methods on the distribution and variability of raw forecasts can only be comprehended through an extended set of complementary statistical metrics, our study shows that optimally implementing MOS in AQ forecast systems crucially requires selecting the appropriate skill score to be optimized for the forecast application of interest.

Petetin, H., Bowdalo, D., Bretonnière, P.-A., Guevara, M., Jorba, O., Armengol, J. M., Samso Cabre, M., Serradell, K., Soret, A., and Pérez Garcia-Pando, C.: Model Output Statistics (MOS) applied to CAMS O3 forecasts: trade-offs between continuous and categorical skill scores, Atmos. Chem. Phys. Discuss. [preprint], https://doi.org/10.5194/acp-2021-864, in review, 2021.

How to cite: Petetin, H., Bowdalo, D., Bretonnière, P.-A., Guevara, M., Jorba, O., Mateu armengol, J., Samso Cabre, M., Serradell, K., Soret, A., and Pérez García-Pando, C.: Model Output Statistics (MOS) and Machine Learning applied to CAMS O3 forecasts: trade-offs between continuous and categorical skill scores, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5746, https://doi.org/10.5194/egusphere-egu22-5746, 2022.

With the goal of developing a data-driven parameterization of unresolved gravity waves (GW) momentum transport for use in general circulation models (GCMs), we investigate neural network architectures that emulate the Alexander-Dunkerton 1999 (AD99) scheme, an existing physics-based GW parameterization. We analyze the distribution of errors as functions of shear-related metrics in an effort to diagnose the disparity between online and offline performance of the trained emulators, and develop a sampling algorithm to treat biases on the tails of the distribution without adversely impacting mean performance. 

It has been shown in previous efforts [1] that stellar offline performance does not necessarily guarantee adequate online performance, or even stability. Error analysis reveals that the majority of the samples are learned quickly, while some stubborn samples remain poorly represented. We find that the more error-prone samples are those with wind profiles that have large shears. This is consistent with physical intuition, as gravity waves encounter a wider range of critical levels when experiencing large shear; therefore, parameterizing gravity waves for these samples is a more difficult, complex task. To remedy this, we develop a sampling strategy that performs a parameterized histogram equalization, a concept borrowed from 1D optimal transport.

The sampling algorithm uses a linear mapping from the original histogram to a more uniform histogram parameterized by $t \in [0,1]$, where $t=0$ recovers the original distribution and $t=1$ enforces a completely uniform distribution. A given value of $t$ assigns each bin a new probability, which we then use to sample from each bin. If the new probability is smaller than the original, then we invoke sampling without replacement, but limited to a reduced number consistent with the new probability. If the new probability is larger than the original, then we repeat all the samples in the bin up to some predetermined maximum repeat value (a threshold to avoid extreme oversampling at the tails). We optimize this sampling algorithm with respect to $t$, the maximum repeat value, and the number and distribution (uniform or not) of the histogram bins. The ideal combination of those parameters yields errors that are closer to a constant function of the shear metrics while maintaining high accuracy over the whole dataset. Although we study the performance of this algorithm in the context of training a gravity wave parameterization emulator, this strategy can be used for learning datasets with long-tailed distributions where the rare samples are associated with low accuracy. Instances of this type of dataset are prevalent in Earth system dynamics: the launching of gravity waves and extreme events like hurricanes and heat waves are just a few examples.
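
A minimal sketch of such a t-parameterised resampling is given below; the equal-width bins, cap value and synthetic shear metric are illustrative assumptions, not the tuned values from the study.

# Minimal sketch: blend the original histogram towards uniform (controlled by t),
# undersample over-represented bins without replacement and oversample rare bins
# with a capped number of repeats.
import numpy as np

def resample_indices(metric, n_bins=20, t=0.5, max_repeat=5, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    edges = np.linspace(metric.min(), metric.max(), n_bins + 1)[1:-1]
    bins = np.digitize(metric, edges)              # bin index 0 .. n_bins-1 per sample
    n = len(metric)
    chosen = []
    for b in range(n_bins):
        idx = np.where(bins == b)[0]
        if len(idx) == 0:
            continue
        p_orig = len(idx) / n
        p_new = (1 - t) * p_orig + t / n_bins      # blend towards the uniform histogram
        target = int(round(p_new * n))
        if target <= len(idx):                     # undersample without replacement
            chosen.append(rng.choice(idx, size=target, replace=False))
        else:                                      # oversample by repetition, capped
            target = min(target, max_repeat * len(idx))
            chosen.append(rng.choice(idx, size=target, replace=True))
    return np.concatenate(chosen)

shear = np.random.default_rng(1).gamma(2.0, 1.0, 100000)   # long-tailed shear metric
train_idx = resample_indices(shear, t=0.7)                 # indices used for training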

[1] Espinosa, Z. I., A. Sheshadri, G. R. Cain, E. P. Gerber, and K. J. DallaSanta, 2021: A Deep Learning Parameterization of Gravity Wave Drag Coupled to an Atmospheric Global Climate Model,Geophys. Res. Lett., in review. [https://edwinpgerber.github.io/files/espinosa_etal-GRL-revised.pdf]

How to cite: Yang, L. and Gerber, E.: Sampling strategies for data-driven parameterization of gravity wave momentum transport, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5766, https://doi.org/10.5194/egusphere-egu22-5766, 2022.

EGU22-5980 | Presentations | ITS2.6/AS5.1 | Highlight

Probabilistic forecasting of heat waves with deep learning 

George Miloshevich, Valerian Jacques-Dumas, Pierre Borgnat, Patrice Abry, and Freddy Bouchet

Extreme events such as storms, floods, cold spells and heat waves are expected to have an increasing societal impact with climate change. However, the study of rare events is complicated by the computational costs of highly complex models and the lack of observations. With the help of machine learning, synthetic models for forecasting can be constructed and cheaper resampling techniques can be developed. Consequently, this may also clarify the more regional impacts of climate change.

In this work, we perform a detailed analysis of how deep neural networks (DNNs) can be used in intermediate-range forecasting of prolonged heat waves lasting several weeks over synoptic spatial scales. In particular, we train a convolutional neural network (CNN) on 7200 years of a climate model simulation. We are interested in probabilistic prediction (the committor function in transition theory). Thus we discuss proper forecasting scores such as the Brier skill score, which is popular in weather prediction, and the cross-entropy skill, which is based on information-theoretic considerations. They allow us to measure the success of various architectures and to investigate more efficient pipelines to extract the predictions from physical observables such as geopotential, temperature and soil moisture. A priori, the committor is hard to visualize, as it is a high-dimensional function of its inputs, the grid points of the climate model for a given field. Fortunately, we can construct composite maps conditioned on its values, which reveal that the CNN is likely relying on the global teleconnection patterns of geopotential. The soil moisture signal, on the other hand, is more localized, with predictive capability over much longer times in the future (at least a month); this relates to soil-atmosphere interactions. One expects the performance of DNNs to greatly improve with more data, and we provide a quantitative assessment of this fact. In addition, we offer more details on how the undersampling of negative events affects the knowledge of the committor function. We show that transfer learning helps ensure that the committor is a smooth function along the trajectory, an important quality when such a committor is applied in rare event algorithms for importance sampling.
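
For concreteness, the Brier skill score mentioned here compares the forecast's Brier score against a climatological reference; a minimal sketch with synthetic committor-like probabilities follows (the event rate and numbers are invented).

# Minimal sketch of the Brier skill score for rare-event probabilities.
import numpy as np

rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.05).astype(float)            # rare heat-wave occurrences (~5%)
p = np.clip(y * 0.4 + rng.random(1000) * 0.2, 0, 1)    # committor-like forecast probabilities

bs = np.mean((p - y) ** 2)                             # Brier score of the forecast
bs_clim = np.mean((y.mean() - y) ** 2)                 # reference: climatological frequency
bss = 1.0 - bs / bs_clim                               # Brier skill score (1 = perfect)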
 
While DNNs are universal function approximators, the issue of extrapolation can be somewhat problematic. To address this question, we train a CNN on a dataset generated from a simulation without a diurnal cycle, where the feedbacks between soil moisture and heat waves appear to be significantly stronger. Nevertheless, when the CNN with the given weights is validated on a dataset generated from a simulation with a diurnal cycle, the predictions seem to generalize relatively well, despite a small reduction in skill. This generality validates the approach. 
 

How to cite: Miloshevich, G., Jacques-Dumas, V., Borgnat, P., Abry, P., and Bouchet, F.: Probabilistic forecasting of heat waves with deep learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5980, https://doi.org/10.5194/egusphere-egu22-5980, 2022.

EGU22-6479 | Presentations | ITS2.6/AS5.1

Parameter inference and uncertainty quantification for an intermediate complexity climate model 

Benedict Roeder, Jakob Schloer, and Bedartha Goswami

Well-adapted parameters in climate models are essential to make accurate predictions for future projections. In climate science, the record of precise and comprehensive observational data is rather short, and parameters of climate models are often hand-tuned or learned from artificially generated data. Due to limited and noisy data, one wants to use Bayesian models to have access to uncertainties of the inferred parameters. Most popular algorithms for learning parameters from observational data, like the Kalman inversion approach, only provide point estimates of parameters.

In this work, we compare two Bayesian parameter inference approaches applied to the intermediate complexity model for the El Niño-Southern Oscillation by Zebiak & Cane: i) the "Calibrate, Emulate, Sample" (CES) approach, an extension of the ensemble Kalman inversion which allows posterior inference by emulating the model via Gaussian Processes and thereby enables efficient sampling; ii) the simulation-based inference (SBI) approach, where the approximate posterior distribution is learned from simulated model data and observational data using neural networks.

We evaluate the performance of both approaches by comparing their run times and the number of required model evaluations, assess the scalability with respect to the number of inference parameters, and examine their posterior distributions.
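
A minimal sketch of the "emulate, then sample" idea behind CES, using a scikit-learn Gaussian process emulator and a random-walk Metropolis sampler; the one-parameter toy forward model, prior bounds and noise levels are placeholders, not the Zebiak & Cane setup or the authors' configuration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def forward_model(theta):                  # toy stand-in for an expensive climate model
    return np.sin(theta) + 0.1 * theta

# "Calibrate": a small ensemble of model runs at different parameter values
theta_train = np.linspace(-3, 3, 15)[:, None]
y_train = forward_model(theta_train).ravel() + rng.normal(0, 0.05, 15)

# "Emulate": fit a Gaussian process to the ensemble
gp = GaussianProcessRegressor(ConstantKernel() * RBF(), alpha=0.05**2).fit(theta_train, y_train)

# "Sample": random-walk Metropolis on the emulated posterior
y_obs, sigma_obs = 0.8, 0.1
def log_post(theta):
    if not -3 < theta < 3:                 # uniform prior bounds
        return -np.inf
    mu, std = gp.predict(np.array([[theta]]), return_std=True)
    var = sigma_obs**2 + std[0]**2         # observation noise plus emulator uncertainty
    return -0.5 * (y_obs - mu[0])**2 / var - 0.5 * np.log(var)

samples, theta = [], 0.0
lp = log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
print(np.mean(samples[1000:]), np.std(samples[1000:]))
```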

How to cite: Roeder, B., Schloer, J., and Goswami, B.: Parameter inference and uncertainty quantification for an intermediate complexity climate model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6479, https://doi.org/10.5194/egusphere-egu22-6479, 2022.

EGU22-6553 | Presentations | ITS2.6/AS5.1

Can simple machine learning methods predict concentrations of OH better than state of the art chemical mechanisms? 

Sebastian Hickman, Paul Griffiths, James Weber, and Alex Archibald

Concentrations of the hydroxyl radical, OH, control the lifetime of methane, carbon monoxide and other atmospheric constituents.  The short lifetime of OH, coupled with the spatial and temporal variability in its sources and sinks, makes accurate simulation of its concentration particularly challenging. To date, machine learning (ML) methods have been infrequently applied to global studies of atmospheric chemistry.

We present an assessment of the use of ML methods for the challenging case of simulation of the hydroxyl radical at the global scale, and show that several approaches are indeed viable.  We use observational data from the recent NASA Atmospheric Tomography Mission to show that machine learning methods are comparable in skill to state of the art forward chemical models and are capable, if appropriately applied, of simulating OH to within observational uncertainty.  

We show that a simple ridge regression model is a better predictor of OH concentrations in the remote atmosphere than a state of the art chemical mechanism implemented in a forward box model. Our work shows that machine learning may be an accurate emulator of chemical concentrations in atmospheric chemistry, which would allow a significant speed-up in climate model runtime given the efficiency of simple machine learning methods. Furthermore, we show that relatively few predictors are required to simulate OH concentrations, suggesting that the variability in OH can be quantitatively accounted for by a few observables, with the potential to simplify the numerical simulation of atmospheric levels of key species such as methane. 
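
A minimal scikit-learn sketch of the ridge-regression approach; the predictor set and synthetic data below are placeholders standing in for the ATom observations, not the authors' actual feature set:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical predictor columns (e.g. ozone, water vapour, NO, photolysis rate,
# temperature, pressure) and observed OH concentrations.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = 2.0 * X[:, 0] + 1.5 * X[:, 3] - 0.5 * X[:, 5] + rng.normal(0, 0.3, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```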

How to cite: Hickman, S., Griffiths, P., Weber, J., and Archibald, A.: Can simple machine learning methods predict concentrations of OH better than state of the art chemical mechanisms?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6553, https://doi.org/10.5194/egusphere-egu22-6553, 2022.

EGU22-6674 | Presentations | ITS2.6/AS5.1

The gravity wave parameterization calibration problem: A 1D QBO model testbed 

Ofer Shamir, L. Minah Yang, David S. Connelly, and Edwin P. Gerber

An essential step in implementing any new parameterization is calibration, where the parameterization is adjusted to work with an existing model and yield some desired improvement. In the context of gravity wave (GW) momentum transport, calibration is necessitated by the facts that: (i) Some GWs are always at least partially resolved by the model, and hence a parameterization should only account for the missing waves. Worse, the parameterization may need to correct for the misrepresentation of under-resolved GWs, i.e., coarse vertical resolution can bias GW breaking level, leading to erroneous momentum forcing. (ii) The parameterized waves depend on the resolved solution for both their sources and dissipation, making them susceptible to model biases. Even a "perfect" parameterization could then yield an undesirable result, e.g., an unrealistic Quasi-Biennial Oscillation (QBO).  While model-specific calibration is required, one would like a general "recipe" suitable for most models. From a practical point of view, the adoption of a new parameterization will be hindered by a too-demanding calibration process. This issue is of particular concern in the context of data-driven methods, where the number of tunable degrees of freedom is large (possibly in the millions). Thus, more judicious ways for addressing the calibration step are required. 

To address the above issues, we develop a 1D QBO model, where the "true" gravity wave momentum deposition is determined from a source distribution and critical level breaking, akin to a traditional physics-based GW parameterization. The control parameters associated with the source consist of the total wave flux (related to the total precipitation for convectively generated waves) and the spectrum width (related to the depth of convection). These parameters can be varied to mimic the variability in GW sources between different models, i.e., biases in precipitation variability. In addition, the model’s explicit diffusivity and vertical advection can be varied to mimic biases in model numerics and circulation, respectively. The model thus allows us to assess the ability of a data-driven parameterization to (i) extrapolate, capturing the response of GW momentum transport to a change in the model parameters and (ii) be calibrated, adjusted to maintain the desired simulation of the QBO in response to a change in the model parameters. The first property is essential for a parameterization to be used for climate prediction, the second, for a parameterization to be used at all. We focus in particular on emulators of the GW momentum transport based on neural network and regression trees, contrasting their ability to satisfy both of these goals.  

 

How to cite: Shamir, O., Yang, L. M., Connelly, D. S., and Gerber, E. P.: The gravity wave parameterization calibration problem: A 1D QBO model testbed, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6674, https://doi.org/10.5194/egusphere-egu22-6674, 2022.

EGU22-6859 | Presentations | ITS2.6/AS5.1

Towards physics-informed stochastic parametrizations of subgrid physics in ocean models 

Dmitri Kondrashov

All oceanic general circulation models (GCMs) include parametrizations of the unresolved subgrid-scale (eddy) effects on the large-scale motions, even at the (so-called) eddy-permitting resolutions. Among the many problems associated with the development of accurate and efficient eddy parametrizations, one is a reliable decomposition of a turbulent flow into resolved and unresolved (subgrid) scale components. Finding an objective way to separate eddies is a fundamental, critically important and unresolved problem. 
Here a statistically consistent correlation-based flow decomposition method (CBD) that employs a Gaussian filtering kernel with geographically varying topology – consistent with the observed local spatial correlations – achieves the desired scale separation. CBD is demonstrated for an eddy-resolving solution of the classical midlatitude double-gyre quasigeostrophic (QG) circulation, which possesses two asymmetric gyres of opposite circulation and a strong meandering eastward jet, akin to the Gulf Stream in the North Atlantic and the Kuroshio in the North Pacific. CBD facilitates a comprehensive analysis of the feedbacks of eddies on the large-scale flow via the transient part of the eddy forcing. A `product integral' based on the time-lagged correlation between the diagnosed eddy forcing and the evolving large-scale flow uncovers a robust `eddy backscatter' mechanism. Data-driven augmentation of a non-eddy-resolving ocean model by stochastically emulated eddy fields makes it possible to restore the missing eddy-driven features, such as the merging western boundary currents, their eastward extension and the low-frequency variability of the gyres.
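
A simplified illustration of the scale-separation step, using a fixed-width isotropic Gaussian filter in place of the CBD's geographically varying kernel (that spatial adaptivity is the key ingredient of the actual method and is not reproduced in this sketch):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(field, sigma=8):
    """Split a 2-D snapshot into a large-scale (filtered) part and an eddy residual."""
    large_scale = gaussian_filter(field, sigma=sigma, mode="nearest")
    eddy = field - large_scale
    return large_scale, eddy

# Example on a synthetic vorticity-like field: a smooth gyre plus small-scale noise.
ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx]
gyre = np.sin(2 * np.pi * y / ny) * np.cos(np.pi * x / nx)
snapshot = gyre + 0.2 * np.random.default_rng(0).normal(size=(ny, nx))
mean_flow, eddies = decompose(snapshot)
```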

  • Agarwal, N., Ryzhov, E. A., Kondrashov, D., and Berloff, P. S., 2021: Correlation-based flow decomposition and statistical analysis of the eddy forcing, Journal of Fluid Mechanics, 924, A5, doi:10.1017/jfm.2021.604

  • Agarwal, N., Kondrashov, D., Dueben, P., Ryzhov, E. A., and Berloff, P. S., 2021: A comparison of data-driven approaches to build low-dimensional ocean models, Journal of Advances in Modeling Earth Systems, doi:10.1029/2021MS002537

 

How to cite: Kondrashov, D.: Towards physics-informed stochastic parametrizations of subgrid physics in ocean models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6859, https://doi.org/10.5194/egusphere-egu22-6859, 2022.

EGU22-7044 | Presentations | ITS2.6/AS5.1

Seismic Event Characterization using Manifold Learning Methods 

Yuri Bregman, Yochai Ben Horin, Yael Radzyner, Itay Niv, Maayan Kahlon, and Neta Rabin

Manifold learning is a branch of machine learning that focuses on compactly representing complex datasets based on their fundamental intrinsic parameters. One such method is diffusion maps, which reduces the dimension of the data while preserving its geometric structure. In this work, diffusion maps are applied to several seismic event characterization tasks. The first task is automatic earthquake-explosion discrimination, which is an essential component of nuclear test monitoring. We also use this technique to automatically identify mine explosions and aftershocks following large earthquakes. Identification of such events helps to lighten the analysts’ burden and allows for timely production of reviewed seismic bulletins.

The proposed methods begin with a pre-processing stage in which a time–frequency representation is extracted from each seismogram while capturing common properties of seismic events and overcoming magnitude differences. Then, diffusion maps are used in order to construct a low-dimensional model of the original data. In this new low-dimensional space, classification analysis is carried out.
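
A minimal numpy implementation of the diffusion-map embedding described above, operating on precomputed time-frequency feature vectors; the kernel-scale heuristic, feature dimensions and random placeholder data are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_maps(features, n_components=3, epsilon=None):
    """Minimal diffusion-map embedding of feature vectors (one row per seismogram)."""
    d2 = cdist(features, features, "sqeuclidean")
    if epsilon is None:
        epsilon = np.median(d2)                    # common heuristic for the kernel scale
    K = np.exp(-d2 / epsilon)
    P = K / K.sum(axis=1, keepdims=True)           # row-normalized Markov matrix
    eigval, eigvec = np.linalg.eig(P)
    order = np.argsort(-eigval.real)
    # Skip the trivial first eigenvector; scale by eigenvalues to obtain diffusion coordinates.
    idx = order[1:n_components + 1]
    return eigvec.real[:, idx] * eigval.real[idx]

# `tf_features` would hold one flattened time-frequency representation per seismogram.
tf_features = np.random.default_rng(0).normal(size=(200, 512))
embedding = diffusion_maps(tf_features, n_components=2)   # input to the classification stage
```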

The algorithm’s discrimination performance is demonstrated on several seismic data sets. For instance, using the seismograms from EIL station, we identify arrivals that were caused by explosions at the nearby Eshidiya mine in Jordan. The model provides a visualization of the data, organized by its intrinsic factors. Thus, along with the discrimination results, we provide a compact organization of the data that characterizes the activity patterns in the mine.

Our results demonstrate the potential and strength of the manifold learning based approach, which may also be suitable for other geophysics domains.

How to cite: Bregman, Y., Ben Horin, Y., Radzyner, Y., Niv, I., Kahlon, M., and Rabin, N.: Seismic Event Characterization using Manifold Learning Methods, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7044, https://doi.org/10.5194/egusphere-egu22-7044, 2022.

EGU22-7093 | Presentations | ITS2.6/AS5.1

Reservoir inflow forecast by combining meteorological ensemble forecast, physical hydrological simulation and machine learning 

Accurate streamflow forecasts can provide guidance for reservoir management, which can regulate river flows, manage water resources and mitigate flood damages. One popular way to forecast streamflow is to use bias-corrected meteorological forecasts to drive a calibrated hydrological model. But for cascade reservoirs, such approaches suffer significant deficiencies because of the difficulty of simulating reservoir operations with a physical approach and the uncertainty of meteorological forecasts over small catchments. Another popular way is to forecast streamflow with machine learning methods, which can fit a statistical model without inputs like reservoir operating rules. Thus, we integrate meteorological forecasts, a land surface hydrological model and machine learning to forecast hourly streamflow over the Yantan catchment, which is one of the cascade reservoirs in the Hongshui River, with streamflow influenced by both the upstream reservoir water release and the rainfall-runoff process within the catchment.

Before evaluating the streamflow forecast system, it is necessary to investigate its skill by means of a series of specific hindcasts that isolate potential sources of predictability, like meteorological forcing and the initial condition (IC). Here, we use the ensemble streamflow prediction (ESP)/reverse ESP (revESP) method to explore the impact of the IC on hourly streamflow prediction. Results show that the effect of the IC on runoff prediction lasts about 16 hours. In the next step, we evaluate the hourly streamflow hindcasts during the rainy seasons of 2013-2017 performed by the forecast system. We use European Centre for Medium-Range Weather Forecasts perturbed forecast forcing from the THORPEX Interactive Grand Global Ensemble (TIGGE-ECMWF) as meteorological inputs to perform the hourly streamflow hindcasts. Compared with the ESP, the hydrometeorological ensemble forecast approach reduces probabilistic and deterministic forecast errors by 6% during the first 7 days. After integrating the long short-term memory (LSTM) deep learning method into the system, the deterministic forecast error can be further reduced by 6% in the first 72 hours. We also use historically observed streamflow to drive another LSTM model to perform an LSTM-only streamflow forecast. Results show that its skill drops sharply after the first 24 hours, which indicates that the meteorology-hydrology modeling approach can improve the streamflow forecast.

How to cite: Liu, J. and Yuan, X.: Reservoir inflow forecast by combining meteorological ensemble forecast, physical hydrological simulation and machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7093, https://doi.org/10.5194/egusphere-egu22-7093, 2022.

EGU22-7113 | Presentations | ITS2.6/AS5.1 | Highlight

Coupling regional air quality simulations of EURAD-IM with street canyon observations - a machine learning approach 

Charlotte Neubacher, Philipp Franke, Alexander Heinlein, Axel Klawonn, Astrid Kiendler-Scharr, and Anne-Caroline Lange

State of the art atmospheric chemistry transport models on regional scales, such as EURAD-IM (EURopean Air pollution Dispersion-Inverse Model), simulate physical and chemical processes in the atmosphere to predict the dispersion of air pollutants. With EURAD-IM’s 4D-var data assimilation application, detailed analyses of air quality can be conducted. These analyses allow for improvements of atmospheric chemistry forecasts as well as emission source strength assessments. Simulations of EURAD-IM can be nested to a spatial resolution of 1 km, which does not correspond to the urban scale. Thus, inner-city street canyon observations cannot be exploited, since anthropogenic pollution there varies vastly over scales of 100 m or less.

We address this issue by implementing a machine learning (ML) module into EURAD-IM, forming a hybrid model that enables bridging the representativeness gap between model resolution and inner-city observations. Thus, the data assimilation of EURAD-IM is strengthened by additional observations in urban regions. Our ML module is based on a neural network (NN) with relevant environmental information on street architecture, traffic density, meteorology, and atmospheric pollutant concentrations from EURAD-IM, as well as the street canyon observations of pollutants, as input features. The NN then maps the observed concentrations from the street canyon scale to larger spatial scales.

We are currently working with a fully controllable test environment created from EURAD-IM forecasts of the years 2020 and 2021 at different spatial resolutions. Here, the ML model maps the high-resolution hourly NO2 concentration to the concentration of the low-resolution model grid. It turns out that it is very difficult for NNs to learn the hourly concentrations with equal accuracy using diurnal cycles of pollutant concentrations. Thus, we develop a model that uses an independent NN for each hour to support time-of-day learning. This reduces the training error by a factor of 10². As a proof of concept, we trained the ML model in an overfitting regime where the mean squared training error reduces to 0.001% for each hour. Furthermore, by optimizing the hyperparameters and introducing regularization terms to reduce the overfitting, we achieved a validation error of 9−12% during the night and 9−16% during the day.
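
A minimal sketch of the one-network-per-hour idea, using a scikit-learn MLPRegressor as a stand-in for the actual NN architecture; the features, layer sizes and placeholder data are assumptions, not the EURAD-IM setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

class HourlyEnsemble:
    """One independent regressor per hour of day, mirroring the hourly-NN design."""
    def __init__(self):
        self.models = {h: make_pipeline(StandardScaler(),
                                        MLPRegressor(hidden_layer_sizes=(64, 64),
                                                     max_iter=2000, random_state=0))
                       for h in range(24)}

    def fit(self, X, y, hours):
        for h in range(24):
            mask = hours == h
            if mask.any():
                self.models[h].fit(X[mask], y[mask])
        return self

    def predict(self, X, hours):
        out = np.empty(len(X))
        for h in range(24):
            mask = hours == h
            if mask.any():
                out[mask] = self.models[h].predict(X[mask])
        return out

# X: street-canyon features (architecture, traffic, meteorology, coarse-grid NO2); y: target NO2.
rng = np.random.default_rng(0)
X, hours = rng.normal(size=(2400, 5)), np.tile(np.arange(24), 100)
y = X[:, 0] + 0.5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.1, 2400)
model = HourlyEnsemble().fit(X, y, hours)
```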

How to cite: Neubacher, C., Franke, P., Heinlein, A., Klawonn, A., Kiendler-Scharr, A., and Lange, A.-C.: Coupling regional air quality simulations of EURAD-IM with street canyon observations - a machine learning approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7113, https://doi.org/10.5194/egusphere-egu22-7113, 2022.

EGU22-7135 | Presentations | ITS2.6/AS5.1 | Highlight

How to calibrate a climate model with neural network based physics? 

Blanka Balogh, David Saint-Martin, and Aurélien Ribes

Unlike the traditional subgrid scale parameterizations used in climate models, current neural network (NN) parameterizations are only tuned offline, by minimizing a loss function on outputs from high resolution models. This approach often leads to numerical instabilities and long-term biases. Here, we propose a method to design tunable NN parameterizations and calibrate them online. The calibration of the NN parameterization is achieved in two steps. First, some model parameters are included within the NN model input. This NN model is fitted at once for a range of values of the parameters, using an offline metric. Second, once the NN parameterization has been plugged into the climate model, the parameters included among the NN inputs are optimized with respect to an online metric quantifying errors on long-term statistics. We illustrate our method with two simple dynamical systems. Our approach significantly reduces long-term biases of the climate model with NN based physics.

How to cite: Balogh, B., Saint-Martin, D., and Ribes, A.: How to calibrate a climate model with neural network based physics?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7135, https://doi.org/10.5194/egusphere-egu22-7135, 2022.

EGU22-8279 | Presentations | ITS2.6/AS5.1

Using deep learning to improve the spatial resolution of the ocean model 

Ihor Hromov, Georgy Shapiro, Jose Ondina, Sanjay Sharma, and Diego Bruciaferri

For ocean models, the increase of spatial resolution is a matter of significant importance and thorough research. Computational resources limit our capability to increase model resolution. This constraint is especially true for traditional dynamical models, for which an increase of a factor of two in the horizontal resolution results in simulation times that are approximately ten times longer. One of the potential methods to relax this limitation is to use Artificial Intelligence methods, such as Neural Networks (NN). In this research, a NN is applied to ocean circulation modelling. More specifically, the NN is used on data output from the dynamical model to increase the spatial resolution of the model output. The main dataset being used is Sea Surface Temperature data at 0.05- and 0.02-degree horizontal resolution for the Irish Sea. 

Several NN architectures were applied to address the task: Generative Adversarial Networks (GAN), Convolutional Neural Networks (CNN) and Multi-level Wavelet CNN, all of which have been used in other fields for resolution-enhancement problems. The work will compare the methods and present a provisional assessment of the efficiency of each. 

How to cite: Hromov, I., Shapiro, G., Ondina, J., Sharma, S., and Bruciaferri, D.: Using deep learning to improve the spatial resolution of the ocean model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8279, https://doi.org/10.5194/egusphere-egu22-8279, 2022.

EGU22-8334 | Presentations | ITS2.6/AS5.1

Information theory solution approach for air-pollution sensors' location-allocation problem 

Barak Fishbain, Ziv Mano, and Shai Kendler

Urbanization and industrialization processes are accompanied by adverse environmental effects, such as air pollution. The first action in reducing air pollution is the detection of its source(s). This is achievable through monitoring. When deploying a sensor array, one must balance the array's cost and performance. This optimization problem is known as the location-allocation problem. Here, a new solution approach, which draws its foundation from information theory, is presented. The core of the method is air-pollution levels computed by a dispersion model under various meteorological conditions. The sensors are then placed in the locations which information theory identifies as the most uncertain. The method is compared with two other heuristics typically applied to solve the location-allocation problem. In the first, sensors are randomly deployed; in the second, the sensors are placed according to the maximal cumulative pollution levels (i.e., hot spots). For the comparison, two simulated scenes were evaluated: one contains point sources and buildings, and the other also contains line sources (i.e., roads). The comparison shows that the entropy method resulted in a superior sensor deployment compared to the other two approaches in terms of source apportionment and dense pollution field reconstruction from the sensor network measurements.
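
One simple way to realize the entropy-based placement is to bin the dispersion-model concentrations at each candidate location across meteorological scenarios and select the locations with the largest Shannon entropy; the sketch below follows that assumption and is not necessarily the authors' exact formulation:

```python
import numpy as np

def shannon_entropy(values, bins=10):
    """Entropy of the distribution of simulated concentrations at one location."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def place_sensors(conc, n_sensors=5):
    """conc has shape (n_scenarios, n_locations): dispersion-model output for many
    meteorological conditions. Returns the indices of the most uncertain locations."""
    entropies = np.array([shannon_entropy(conc[:, j]) for j in range(conc.shape[1])])
    return np.argsort(-entropies)[:n_sensors]

# Toy example: 500 meteorological scenarios over a 20x20 grid flattened to 400 cells.
rng = np.random.default_rng(0)
conc = rng.lognormal(mean=rng.normal(size=400), sigma=1.0, size=(500, 400))
print(place_sensors(conc, n_sensors=5))
```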

How to cite: Fishbain, B., Mano, Z., and Kendler, S.: Information theory solution approach for air-pollution sensors' location-allocation problem, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8334, https://doi.org/10.5194/egusphere-egu22-8334, 2022.

EGU22-8719 | Presentations | ITS2.6/AS5.1

Multi-station Multivariate Multi-step Convection Nowcasting with Deep Neural Networks 

Sandy Chkeir, Aikaterini Anesiadou, and Riccardo Biondi

Extreme weather nowcasting has always been a challenging task in meteorology. Many research studies have been conducted to accurately forecast extreme weather events, related to rain rates and/or wind speed thresholds, in spatio-temporal scales. Over decades, this field gained attention in the artificial intelligence community which is aiming towards creating more accurate models using the latest algorithms and methods.  

In this work, within the H2020 SESAR ALARM project, we aim to nowcast rain and wind speed as target features using different input configurations of the available sources, such as weather stations, lightning detectors, radar, GNSS receivers, radiosonde and radio occultation data. This nowcasting task has first been conducted at 14 local stations around Milano Malpensa Airport as short-term temporal multi-step forecasting. In a second step, all stations will be combined, meaning that the forecasting becomes a spatio-temporal problem. Concretely, we want to investigate the predicted rain and wind speed values using the different inputs for two scenarios: for each station separately, and joining all stations together. 

The chaotic nature of the atmosphere, e.g. the non-stationarity of the driving series of each weather feature, makes the predictions unreliable and inaccurate, and thus dealing with these data is a very delicate task. For this reason, we have devoted some work to cleaning, feature engineering and preparing the raw data before feeding them into the model architectures. We have managed to preprocess large amounts of data for local stations around the airport, and studied the feasibility of nowcasting rain and wind speed targets using the different data sources altogether. The temporal multivariate driving series have high dimensionality, and we have made multi-step predictions for the defined target functions.

We study and test different machine learning architectures, ranging from simple multi-layer perceptrons to convolutional models and Recurrent Neural Networks (RNN), for temporal and spatio-temporal nowcasting. The Long Short-Term Memory (LSTM) encoder-decoder architecture outperforms the other models, achieving more accurate predictions for each station separately. Furthermore, to predict the targets on a spatio-temporal scale, we will deploy a 2-layer spatio-temporal stacked LSTM model consisting of independent LSTM models per location in the first LSTM layer, and another LSTM layer to finally predict targets for multiple steps ahead. The results obtained with the different algorithm architectures applied to a dense network of sensors will be reported.
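
A minimal Keras sketch of an encoder-decoder LSTM for multi-step prediction of the two targets at a single station; the window lengths, layer sizes and random placeholder data are illustrative assumptions, not the project's actual configuration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_steps_in, n_steps_out = 48, 6     # e.g. 48 past time steps, 6 future steps
n_features, n_targets = 12, 2       # multivariate inputs; targets: rain rate and wind speed

model = keras.Sequential([
    layers.LSTM(64, input_shape=(n_steps_in, n_features)),   # encoder summarizes the input window
    layers.RepeatVector(n_steps_out),                         # repeat the context per output step
    layers.LSTM(64, return_sequences=True),                   # decoder unrolls over the horizon
    layers.TimeDistributed(layers.Dense(n_targets)),
])
model.compile(optimizer="adam", loss="mse")

# Placeholder arrays standing in for the preprocessed station time series.
X = np.random.rand(1000, n_steps_in, n_features).astype("float32")
y = np.random.rand(1000, n_steps_out, n_targets).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2)
```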

How to cite: Chkeir, S., Anesiadou, A., and Biondi, R.: Multi-station Multivariate Multi-step Convection Nowcasting with Deep Neural Networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8719, https://doi.org/10.5194/egusphere-egu22-8719, 2022.

EGU22-8852 | Presentations | ITS2.6/AS5.1

Time-dependent Hillshades: Dispelling the Shadow Curse of Machine Learning Applications in Earth Observation 

Freddie Kalaitzis, Gonzalo Mateo-Garcia, Kevin Dobbs, Dolores Garcia, Jason Stoker, and Giovanni Marchisio

We show that machine learning models learn and perform better when they know where to expect shadows, through hillshades modeled to the time of imagery acquisition.

Shadows are detrimental to all machine learning applications on satellite imagery. Prediction tasks like semantic / instance segmentation, object detection, and the counting of rivers, roads, buildings and trees all rely on crisp edges and colour gradients that are confounded by the presence of shadows in passive optical imagery, which relies on the sun’s illumination for reflectance values.

Hillshading is a standard technique for enriching a mapped terrain with relief effects, which is done by emulating the shadow caused by steep terrain and/or tall vegetation. A hillshade that is modeled to the time of day and year can be easily derived through a basic form of ray tracing on a Digital Terrain Model (DTM) (also known as a bare-earth DEM) or Digital Surface Model (DSM) given the sun's altitude and azimuth angles. In this work, we use lidar-derived DSMs. A DSM-based hillshade conveys a lot more information on shadows than a bare-earth DEM alone, namely any non-terrain vertical features (e.g. vegetation, buildings) resolvable at a 1-m resolution. The use of this level of fidelity of DSM for hillshading and its input to a machine learning model is novel and the main contribution of our work. Any uncertainty over the angles can be captured through a composite multi-angle hillshade, which shows the range where shadows can appear throughout the day.
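
A time-dependent hillshade can be sketched with the standard slope/aspect formulation applied to a DSM array, given the modeled sun altitude and azimuth; the grid spacing, sign conventions and synthetic surface below are assumptions:

```python
import numpy as np

def hillshade(dsm, sun_altitude_deg, sun_azimuth_deg, cellsize=1.0):
    """Standard hillshade of a DSM for a given sun position (angles in degrees)."""
    zenith = np.radians(90.0 - sun_altitude_deg)
    azimuth = np.radians(sun_azimuth_deg)
    dz_dy, dz_dx = np.gradient(dsm, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    # One common aspect convention; exact signs depend on the grid orientation.
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = (np.cos(zenith) * np.cos(slope) +
              np.sin(zenith) * np.sin(slope) * np.cos(azimuth - aspect))
    return np.clip(shaded, 0.0, 1.0)

# Example: a 1 m DSM tile with the sun position modeled for an acquisition time.
dsm = np.random.default_rng(0).normal(size=(512, 512)).cumsum(axis=0)  # stand-in surface
shade = hillshade(dsm, sun_altitude_deg=35.0, sun_azimuth_deg=160.0, cellsize=1.0)
```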

We show the utility of time-dependent hillshades in the daily mapping of rivers from Very High Resolution (VHR) passive optical and lidar-derived terrain data [1]. Specifically, we leverage the acquisition timestamps within a daily 3m PlanetScope product over a 2-year period. Given a datetime and geolocation, we model the sun’s azimuth and elevation relative to that geolocation at that time of day and year. We can then generate a time-dependent hillshade and therefore locate shadows at any given time within that 2-year period. In our ablation study we show that, out of all the lidar-derived products, the time-dependent hillshades contribute an 8-9% accuracy improvement in the semantic segmentation of rivers. This indicates that a semantic segmentation machine learning model is less prone to errors of commission (false positives), by better disambiguating shadows from dark water.

Time-dependent hillshades are not currently used in ML for EO use-cases, yet they can be useful. All that is needed to produce them is access to high-resolution bare-earth DEMs, like that of the US National 3D Elevation Program covering the entire continental U.S at 1-meter resolution, or creation of DSMs from the lidar point cloud data itself. As the coverage of DSM and/or DEM products expands to more parts of the world, time-dependent hillshades could become as commonplace as cloud masks in EO use cases.


[1] Dolores Garcia, Gonzalo Mateo-Garcia, Hannes Bernhardt, Ron Hagensieker, Ignacio G. Lopez-Francos, Jonathan Stock, Guy Schumann, Kevin Dobbs and Freddie Kalaitzis: Pix2Streams: Dynamic Hydrology Maps from Satellite-LiDAR Fusion. AI for Earth Sciences Workshop, NeurIPS 2020.

How to cite: Kalaitzis, F., Mateo-Garcia, G., Dobbs, K., Garcia, D., Stoker, J., and Marchisio, G.: Time-dependent Hillshades: Dispelling the Shadow Curse of Machine Learning Applications in Earth Observation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8852, https://doi.org/10.5194/egusphere-egu22-8852, 2022.

EGU22-9348 | Presentations | ITS2.6/AS5.1

Data-driven modelling of soil moisture: mapping organic soils 

Doran Khamis, Matt Fry, Hollie Cooper, Ross Morrison, and Eleanor Blyth

Improving our understanding of soil moisture and hydraulics is crucial for flood prediction, smart agriculture, modelling nutrient and pollutant spread and evaluating the role of land as a sink or source of carbon and other greenhouse gases. State of the art land surface models rely on poorly-resolved soil textural information to parametrise arbitrarily layered soil models; soils rich in organic matter – key to understanding the role of the land in achieving net zero carbon – are not well modelled. Here, we build a predictive data-driven model of soil moisture using a neural network composed of transformer layers to process time series data from point-sensors (precipitation gauges and sensor-derived estimates of potential evaporation) and convolutional layers to process spatial atmospheric driving data and contextual information (topography, land cover and use, location and catchment behaviour of water bodies). We train the model using data from the COSMOS-UK sensor network and soil moisture satellite products and compare the outputs with JULES to investigate where and why the models diverge. Finally, we predict regions of high peat content and propose a way to combine theory with our data-driven approach to move beyond the sand-silt-clay modelling framework.

How to cite: Khamis, D., Fry, M., Cooper, H., Morrison, R., and Blyth, E.: Data-driven modelling of soil moisture: mapping organic soils, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9348, https://doi.org/10.5194/egusphere-egu22-9348, 2022.

EGU22-9452 | Presentations | ITS2.6/AS5.1

Eddy identification from along track altimeter data using deep learning: EDDY project 

Adili Abulaitijiang, Eike Bolmer, Ribana Roscher, Jürgen Kusche, Luciana Fenoglio, and Sophie Stolzenberger

Eddies are circular rotating water masses, which are usually generated near the large ocean currents, e.g., Gulf Stream. Monitoring eddies and gaining knowledge on eddy statistics over a large region are important for fishery, marine biology studies, and testing ocean models.

At mesoscale, eddies are observed in radar altimetry, and methods have been developed to identify, track and classify them in gridded maps of sea surface height derived from multi-mission data sets. However, this procedure has drawbacks since much information is lost in the gridded maps. Inevitably, the spatial and temporal resolution of the original altimetry data degrades during the gridding process. Moreover, identifying eddies has so far been a post-analysis process on the gridded dataset, which is not suitable for near-real-time applications or forecasts. In the EDDY project at the University of Bonn, we aim to develop methods for identifying eddies directly from along-track altimetry data via a machine (deep) learning approach.

At the early stage of the project, we started with gridded altimetry maps to set up and test the machine learning algorithm. The gridded datasets are not limited to multi-mission gridded maps from AVISO, but also include the high resolution (~6 km) ocean modeling simulation dataset (e.g., FESOM, Finite Element Sea ice Ocean Model). Later, the gridded maps are sampled along the real altimetry ground tracks to obtain single-track altimetry data. Reference data, as the training set for machine learning, will be produced by an open-source geometry-based approach (e.g., py-eddy-tracker, Mason et al., 2014) with additional constraints like the Okubo-Weiss parameter and Sea Surface Temperature (SST) profile signatures.

In this presentation, we introduce the EDDY project and show results from the machine learning approach based on gridded datasets for the Gulf Stream area for 2017, as well as first results of single-track eddy identification in the region.

How to cite: Abulaitijiang, A., Bolmer, E., Roscher, R., Kusche, J., Fenoglio, L., and Stolzenberger, S.: Eddy identification from along track altimeter data using deep learning: EDDY project, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9452, https://doi.org/10.5194/egusphere-egu22-9452, 2022.

EGU22-9578 | Presentations | ITS2.6/AS5.1

A multivariate convolutional autoencoder to reconstruct satellite data with an error estimate based on non-gridded observations: application to sea surface height 

Alexander Barth, Aida Alvera-Azcárate, Charles Troupin, and Jean-Marie Beckers

DINCAE (Data INterpolating Convolutional Auto-Encoder) is a neural network to reconstruct missing data (e.g. obscured by clouds or gaps between tracks) in satellite data. Contrary to standard image reconstruction (in-painting) with neural networks, this application requires a method to handle missing data (or data with variable accuracy) already in the training phase. Instead of using a cost function based on the mean square error, the neural network (U-Net type of network) is optimized by minimizing the negative log likelihood assuming a Gaussian distribution (characterized by a mean and a variance). As a consequence, the neural network also provides an expected error variance of the reconstructed field (per pixel and per time instance).
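
A sketch (in Python rather than the Julia used by DINCAE 2.0) of a per-pixel Gaussian negative log-likelihood in which missing pixels are masked out of the cost, which is the key difference from a plain mean-square-error loss; array shapes and names are illustrative:

```python
import numpy as np

def masked_gaussian_nll(mean, log_var, obs, mask):
    """Negative log-likelihood of the observed pixels under N(mean, exp(log_var)).

    mean, log_var : network outputs (per pixel)
    obs           : observed field, arbitrary values where data are missing
    mask          : 1 where an observation exists, 0 where it is missing
    """
    var = np.exp(log_var)
    nll = 0.5 * (np.log(2 * np.pi) + log_var + (obs - mean) ** 2 / var)
    return (nll * mask).sum() / mask.sum()

# Toy check: pixels flagged as missing do not contribute to the cost.
rng = np.random.default_rng(0)
obs = rng.normal(size=(64, 64))
mask = (rng.uniform(size=(64, 64)) > 0.4).astype(float)    # ~60% of pixels observed
print(masked_gaussian_nll(np.zeros_like(obs), np.zeros_like(obs), obs, mask))
```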

 

In this updated version DINCAE 2.0, the code was rewritten in Julia and a new type of skip connection has been implemented which showed superior performance with respect to the previous version. The method has also been extended to handle multivariate data (an example will be shown with sea-surface temperature, chlorophyll concentration and wind fields). The improvement of this network is demonstrated in the Adriatic Sea. 

 

Convolutional networks usually work with gridded data as input. This is, however, a limitation for some data types used in oceanography and in Earth Sciences in general, where observations are often irregularly sampled. The first layer of the neural network and the cost function have been modified so that unstructured data can also be used as inputs to obtain gridded fields as output. To demonstrate this, the neural network is applied to along-track altimetry data in the Mediterranean Sea. Results from a 20-year reconstruction are presented and validated. Hyperparameters are determined using Bayesian optimization and minimizing the error relative to a development dataset.

How to cite: Barth, A., Alvera-Azcárate, A., Troupin, C., and Beckers, J.-M.: A multivariate convolutional autoencoder to reconstruct satellite data with an error estimate based on non-gridded observations: application to sea surface height, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9578, https://doi.org/10.5194/egusphere-egu22-9578, 2022.

EGU22-9734 | Presentations | ITS2.6/AS5.1

High Impact Weather Forecasts in Southern Brazil using Ensemble Precipitation Forecasts and Machine Learning 

Cesar Beneti, Jaqueline Silveira, Leonardo Calvetti, Rafael Inouye, Lissette Guzman, Gustavo Razera, and Sheila Paz

In South America, the southern parts of Brazil, Paraguay and northeast Argentina are regions particularly prone to high impact weather (intense lightning activity, high precipitation, hail, flash floods and occasional tornadoes), mostly associated with extra-tropical cyclones, frontal systems and Mesoscale Convective Systems. In the south of Brazil, the agricultural industry and electrical power generation are the main economic activities. This region is responsible for 35% of all hydro-power energy production in the country, with long transmission lines to the main consumer regions, which are severely affected by these extreme weather conditions. Intense precipitation events are a common cause of electricity outages in southern Brazil, which also ranks among the regions with the highest annual lightning incidence in the country. Accurate precipitation forecasts can mitigate this kind of problem. Despite improvements in precipitation estimates and forecasts, some difficulties remain in increasing their accuracy, mainly related to the temporal and spatial location of the events. Although several options are available, it is difficult to identify which deterministic forecast is the best or most reliable. Probabilistic products from large ensemble prediction systems provide a guide to forecasters on how confident they should be about the deterministic forecast, and one approach is using post-processing methods such as machine learning (ML), which has been used to identify patterns in historical data to correct for systematic ensemble biases.

In this paper, we present a study in which we used 20 members from the Global Ensemble Forecast System (GEFS) and 50 members from the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble during 2019-2021, for seven daily precipitation classes: 0-1.0 mm, 1.0-15 mm, 15-40 mm, 40-55 mm, 55-105 mm, 105-155 mm and over 155 mm. An ML model was developed for each forecast day, up to 15 days ahead, and several skill scores were calculated for these daily precipitation classes. Initially, to select the best members of the ensembles, a gradient boosting algorithm was applied in order to improve the skill of the model and reduce processing time. After preprocessing the data, a random forest classifier was used to train the model. Based on hyperparameter sensitivity tests, the random forest required 500 trees, a maximum tree depth of 12 levels, at least 20 samples per leaf node, and the minimization of entropy for splits. In order to evaluate the models, we used cross-validation on a limited data sample. The procedure has a single parameter that refers to the number of groups that a given data sample is to be split into. In our work we created a twenty-six-fold cross-validation with 30 days per fold to verify the forecasts. The results obtained by the RF were evaluated by comparing predicted and observed values. Over the forecast range, we found values above 75% for the precision metrics in the first 3 days, and around 68% in the following days. The recall was also around 80% throughout the entire forecast range. These promising results encourage applying this technique operationally, which is our intent in the near future. 
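
The stated random forest configuration translates directly into scikit-learn; the feature construction and synthetic data below are placeholders standing in for the selected ensemble members and the observed precipitation classes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.metrics import classification_report

# Features could be the selected GEFS/ECMWF ensemble members (and derived statistics);
# the label is the observed daily precipitation class (7 bins). Placeholder data below.
rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=10.0, size=(780, 70))        # 26 folds x 30 days, 70 members
y = np.digitize(X[:, 0], [1, 15, 40, 55, 105, 155])         # stand-in for observed classes

clf = RandomForestClassifier(
    n_estimators=500,        # 500 trees
    max_depth=12,            # maximum tree depth of 12 levels
    min_samples_leaf=20,     # at least 20 samples per leaf node
    criterion="entropy",     # entropy minimization for splits
    random_state=0,
)
# Twenty-six-fold cross-validation with 30 samples per fold, as in the study design.
pred = cross_val_predict(clf, X, y, cv=KFold(n_splits=26))
print(classification_report(y, pred, zero_division=0))      # per-class precision and recall
```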

How to cite: Beneti, C., Silveira, J., Calvetti, L., Inouye, R., Guzman, L., Razera, G., and Paz, S.: High Impact Weather Forecasts in Southern Brazil using Ensemble Precipitation Forecasts and Machine Learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9734, https://doi.org/10.5194/egusphere-egu22-9734, 2022.

EGU22-9833 | Presentations | ITS2.6/AS5.1

Deep learning for laboratory earthquake prediction and autoregressive forecasting of fault zone stress 

Laura Laurenti, Elisa Tinti, Fabio Galasso, Luca Franco, and Chris Marone

Earthquake forecasting and prediction have long, and in some cases sordid, histories but recent work has rekindled interest in this area based on advances in short-term early warning, hazard assessment for human-induced seismicity and successful prediction of laboratory earthquakes.

In the lab, frictional stick-slip events provide an analog for the full seismic cycle and such experiments have played a central role in understanding the onset of failure and the dynamics of earthquake rupture. Lab earthquakes are also ideal targets for machine learning (ML) techniques because they can be produced in long sequences under a wide range of controlled conditions. Indeed, recent work shows that labquakes can be predicted from fault zone acoustic emissions (AE). Here, we generalize these results and explore additional ML and deep learning (DL) methods for labquake prediction. Key questions include whether improved ML/DL methods can outperform existing models, including prediction based on limited training, or if such methods can successfully forecast beyond a single seismic cycle for aperiodic failure. We describe significant improvements to existing methods of labquake prediction using simple AE statistics (variance) and DL models such as Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) models. We demonstrate: 1) that LSTMs and CNNs predict labquakes under a variety of conditions, including pre-seismic creep, aperiodic events and alternating slow and fast events and 2) that fault zone stress can be predicted with fidelity (accuracy in terms of R2 > 0.92), confirming that acoustic energy is a fingerprint of the fault zone stress. We also predict the time to the start of failure (TTsF) and the time to the end of failure (TTeF). Interestingly, TTeF is successfully predicted in all seismic cycles, while the TTsF prediction varies with the amount of fault creep before an event. We also report on a novel autoregressive forecasting method to predict future fault zone states, focusing on shear stress. This forecasting model is distinct from existing predictive models, which predict only the current state. We compare three modern approaches within a sequence-modeling framework: LSTM, Temporal Convolutional Network (TCN) and Transformer Network (TF). Results are encouraging for forecasting shear stress autoregressively at long future horizons. Our ML/DL prediction models outperform the state of the art and our autoregressive model represents a novel forecasting framework that could enhance current methods of earthquake forecasting.

How to cite: Laurenti, L., Tinti, E., Galasso, F., Franco, L., and Marone, C.: Deep learning for laboratory earthquake prediction and autoregressive forecasting of fault zone stress, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9833, https://doi.org/10.5194/egusphere-egu22-9833, 2022.

EGU22-10157 | Presentations | ITS2.6/AS5.1

How land cover changes affect ecosystem productivity 

Andreas Krause, Phillip Papastefanou, Konstantin Gregor, Lucia Layritz, Christian S. Zang, Allan Buras, Xing Li, Jingfeng Xiao, and Anja Rammig

Historically, many forests worldwide were cut down and replaced by agriculture. While this substantially reduced terrestrial carbon storage, the impacts of land-use change on ecosystem productivity have not been adequately resolved yet.

Here, we apply the machine learning algorithm Random Forests to predict the potential gross primary productivity (GPP) of forests, grasslands, and croplands around the globe using high-resolution datasets of satellite-derived GPP, land cover, and 20 environmental predictor variables.

With a mean potential GPP of around 2.0 kg C m-2 yr-1, forests are the most productive land cover on two thirds of the global suitable area, while grasslands and croplands are on average 23% and 9% less productive, respectively. These findings are robust against alternative input datasets and algorithms, even though results are somewhat sensitive to the underlying land cover map.

Combining our potential GPP maps with a land-use reconstruction from the Land-Use Harmonization project (LUH2) we estimate that historical agricultural expansion reduced global GPP by around 6.3 Gt C yr-1 (4.4%). This reduction in GPP induced by land cover changes is amplified in some future scenarios as a result of ongoing deforestation but partly reversed in other scenarios due to agricultural abandonment.

Finally, we compare our potential GPP maps to simulations from eight CMIP6 Earth System Models with an explicit representation of land management. While the mean GPP values of the ESM ensemble show reasonable agreement with our estimates, individual Earth System Models simulate large deviations both in terms of mean GPP values of different land cover types as well as in their spatial variations. Reducing these model biases would lead to more reliable simulations concerning the potential of land-based mitigation policies.

How to cite: Krause, A., Papastefanou, P., Gregor, K., Layritz, L., Zang, C. S., Buras, A., Li, X., Xiao, J., and Rammig, A.: How land cover changes affect ecosystem productivity, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10157, https://doi.org/10.5194/egusphere-egu22-10157, 2022.

EGU22-10519 | Presentations | ITS2.6/AS5.1 | Highlight

Adaptive Bias Correction for Improved Subseasonal Forecasting 

Soukayna Mouatadid, Paulo Orenstein, Genevieve Flaspohler, Miruna Oprescu, Judah Cohen, Franklyn Wang, Sean Knight, Maria Geogdzhayeva, Sam Levang, Ernest Fraenkel, and Lester Mackey

Improving our ability to forecast the weather and climate is of interest to all sectors of the economy and government agencies from the local to the national level. In fact, weather forecasts 0-10 days ahead and climate forecasts seasons to decades ahead are currently used operationally in decision-making, and the accuracy and reliability of these forecasts has improved consistently in recent decades. However, many critical applications require subseasonal forecasts with lead times in between these two timescales. Subseasonal forecasting—predicting temperature and precipitation 2-6 weeks ahead—is indeed critical for effective water allocation, wildfire management, and drought and flood mitigation. Yet, accurate forecasts for the subseasonal regime are still lacking due to the chaotic nature of weather.

While short-term forecasting accuracy is largely sustained by physics-based dynamical models, these deterministic methods have limited subseasonal accuracy due to chaos. Indeed, subseasonal forecasting has long been considered a “predictability desert” due to its complex dependence on both local weather and global climate variables. Nevertheless, recent large-scale research efforts have advanced the subseasonal capabilities of operational physics-based models, while parallel efforts have demonstrated the value of machine learning and deep learning methods in improving subseasonal forecasting.

To counter the systematic errors of dynamical models at longer lead times, we introduce an adaptive bias correction (ABC) method that combines state-of-the-art dynamical forecasts with observations using machine learning. We evaluate our adaptive bias correction method in the contiguous U.S. over the years 2011-2020 and demonstrate consistent improvement over standard meteorological baselines, state-of-the-art learning models, and the leading subseasonal dynamical models, as measured by root mean squared error and uncentered anomaly correlation skill. When applied to the United States’ operational climate forecast system (CFSv2), ABC improves temperature forecasting skill by 20-47% and precipitation forecasting skill by 200-350%. When applied to the leading subseasonal model from the European Centre for Medium-Range Weather Forecasts (ECMWF), ABC improves temperature forecasting skill by 8-38% and precipitation forecasting skill by 40-80%.

Overall, we find that de-biasing dynamical forecasts with our learned adaptive bias correction method yields an effective and computationally inexpensive strategy for generating improved subseasonal forecasts and building the next generation of subseasonal forecasting benchmarks. To facilitate future subseasonal benchmarking and development, we release our model code through the subseasonal_toolkit Python package and our routinely updated SubseasonalClimateUSA dataset through the subseasonal_data Python package.

How to cite: Mouatadid, S., Orenstein, P., Flaspohler, G., Oprescu, M., Cohen, J., Wang, F., Knight, S., Geogdzhayeva, M., Levang, S., Fraenkel, E., and Mackey, L.: Adaptive Bias Correction for Improved Subseasonal Forecasting, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10519, https://doi.org/10.5194/egusphere-egu22-10519, 2022.

EGU22-10711 | Presentations | ITS2.6/AS5.1

A new approach toward integrated inversion of reflection seismic and gravity datasets using deep learning 

Mahtab Rashidifard, Jeremie Giraud, Mark Jessell, and Mark Lindsay

Reflection seismic data, although sparsely distributed due to the high cost of acquisition, is the only type of data that can provide high-resolution images of the crust to reveal deep subsurface structures and the architectural complexity that may vector attention to minerally prospective regions. However, these datasets are not commonly considered in integrated geophysical inversion approaches due to computationally expensive forward modeling and inversion. Common inversion techniques for reflection seismic images were mostly developed for basin studies and have very limited application to hard-rock studies. Post-stack acoustic impedance inversions, for example, rely heavily on petrophysical information extracted along boreholes for depth-correction purposes, which is not necessarily available. Furthermore, the available techniques do not allow simple, automatic integration of seismic inversion with other geophysical datasets. 

 

 We introduce a new methodology that allows the utilization of seismic images within a gravity inversion technique for the purpose of 3D boundary parametrization of the subsurface. The proposed workflow is a novel approach for incorporating seismic images into integrated inversion techniques, which relies on the image-ray method for conversion of seismic datasets between the time and depth domains. This algorithm uses a convolutional neural network to iterate over seismic images in the time and depth domains. This iterative process compensates for the low depth resolution of the gravity datasets. We use a generalized level-set technique for gravity inversion to link the interfaces of the units with the depth-converted seismic images. The algorithm has been tested on realistic synthetic datasets generated from scenarios corresponding to different deformation histories. The preliminary results of this study suggest that post-stack seismic images can be utilized in integrated geophysical inversion algorithms without the need to run computationally expensive full-waveform inversions.  

How to cite: Rashidifard, M., Giraud, J., Jessell, M., and Lindsay, M.: A new approach toward integrated inversion of reflection seismic and gravity datasets using deep learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10711, https://doi.org/10.5194/egusphere-egu22-10711, 2022.

EGU22-11043 | Presentations | ITS2.6/AS5.1

Framework for the deployment of DNNs in remote sensing inversion algorithms applied to Copernicus Sentinel-4 (S4) and TROPOMI/Sentinel-5 Precursor (S5P) 

Fabian Romahn, Victor Molina Garcia, Ana del Aguila, Ronny Lutz, and Diego Loyola

In remote sensing, the quantities of interest (e.g. the composition of the atmosphere) are usually not directly observable but can only be inferred indirectly via the measured spectra. To solve these inverse problems, retrieval algorithms are applied that usually depend on complex physical models, so-called radiative transfer models (RTMs). RTMs are very accurate but also computationally very expensive, and therefore often not feasible in combination with the strict time requirements of operational processing of satellite measurements. With the advances in machine learning, the methods of this field, especially deep neural networks (DNNs), have become very promising for accelerating and improving classical remote sensing retrieval algorithms. However, their application is not straightforward; it is quite challenging, as there are many aspects to consider and parameters to optimize in order to achieve satisfying results.

In this presentation we show a general framework for replacing the RTM used in an inversion algorithm with a DNN that offers sufficient accuracy while at the same time increasing the processing performance by several orders of magnitude. The different steps are explained in detail: sampling and generation of the training data, selection of the DNN hyperparameters, training, and finally integration of the DNN into an operational environment. We will also focus on optimizing the efficiency of each step: optimizing the generation of training samples through smart sampling techniques, accelerating the training data generation through parallelization and other optimizations of the RTM, applying tools for DNN hyperparameter optimization, and using automation tools (source code generation) and appropriate interfaces for the efficient integration in operational processing systems.
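
A minimal sketch of the "smart sampling plus surrogate" idea: Latin hypercube sampling of the RTM input space followed by a small neural network fitted to the simulated spectra. The toy RTM, parameter ranges and network size are assumptions, not the operational S4/S5P setup:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def toy_rtm(params):
    """Stand-in for an expensive radiative transfer model: parameters -> 'spectrum'."""
    wavelengths = np.linspace(0.0, 1.0, 50)
    return np.exp(-params[:, [0]] * wavelengths) + params[:, [1]] * np.sin(5 * wavelengths)

# Latin hypercube sampling of the (here 4-dimensional) RTM input space.
sampler = qmc.LatinHypercube(d=4, seed=0)
unit = sampler.random(n=2000)
lower, upper = np.array([0.1, 0.0, 0.0, 0.0]), np.array([5.0, 1.0, 1.0, 1.0])
params = qmc.scale(unit, lower, upper)

spectra = toy_rtm(params)                 # expensive step, done offline and in parallel
surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=1000,
                                       random_state=0)).fit(params, spectra)
print("surrogate R^2 on training subset:", surrogate.score(params[:200], spectra[:200]))
```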

This procedure has been continuously developed throughout the last years and as a use case, it will be shown how it has been applied in the operational retrieval of cloud properties for the Copernicus satellite sensors Sentinel-4 (S4) and TROPOMI/Sentinel-5 Precursor (S5P).

How to cite: Romahn, F., Molina Garcia, V., del Aguila, A., Lutz, R., and Loyola, D.: Framework for the deployment of DNNs in remote sensing inversion algorithms applied to Copernicus Sentinel-4 (S4) and TROPOMI/Sentinel-5 Precursor (S5P), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11043, https://doi.org/10.5194/egusphere-egu22-11043, 2022.

EGU22-11420 | Presentations | ITS2.6/AS5.1

History Matching for the tuning of coupled models: experiments on the Lorenz 96 model 

Redouane Lguensat, Julie Deshayes, and Venkatramani Balaji

The process of relying on experience and intuition to find good sets of parameters, commonly referred to as "parameter tuning", continues to play a central role in the roadmaps followed by dozens of modeling groups involved in community efforts such as the Coupled Model Intercomparison Project (CMIP). 

In this work, we study a tool from the Uncertainty Quantification community that has recently started to draw attention in climate modeling: History Matching, also referred to as "Iterative Refocussing". The core idea of History Matching is to run several simulations with different sets of parameters and then use observed data to rule out any parameter settings which are "implausible". Since climate simulation models are computationally heavy and do not allow testing every possible parameter setting, we employ an emulator that can be a cheap and accurate replacement. Here a machine learning algorithm, namely Gaussian process regression, is used for the emulation step. History Matching is thus a good example of how the recent advances in machine learning can be of high interest to climate modeling.

One objective of this study is to evaluate the potential for history matching to tune a climate system with multi-scale dynamics. By using a toy climate model, namely the Lorenz 96 model, and producing experiments in a perfect-model setting, we explore different types of applications of HM and highlight the strengths and challenges of using such a technique. 
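
A minimal perfect-model History Matching sketch on Lorenz 96: a small design of runs over the forcing parameter F, a Gaussian process emulator of a summary statistic, and an implausibility measure used to rule out values of F. The choice of statistic, design size and the 3-sigma cutoff are illustrative assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def lorenz96(x, F):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def run_l96(F, n=40, dt=0.01, steps=5000, seed=None):
    """Integrate Lorenz 96 with RK4 and return the time-mean state (summary statistic)."""
    rng = np.random.default_rng(seed)
    x = F + 0.01 * rng.normal(size=n)
    traj = []
    for k in range(steps):
        k1 = lorenz96(x, F)
        k2 = lorenz96(x + 0.5 * dt * k1, F)
        k3 = lorenz96(x + 0.5 * dt * k2, F)
        k4 = lorenz96(x + dt * k3, F)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        if k > 1000:                                   # discard spin-up
            traj.append(x.mean())
    return np.mean(traj)

# Perfect-model "observation" generated with the true forcing F = 8.
obs, obs_err = run_l96(8.0, seed=0), 0.05

# One wave of History Matching: run a small design, emulate, rule out implausible F.
design = np.linspace(4.0, 12.0, 9)
stats = np.array([run_l96(F, seed=1) for F in design])
gp = GaussianProcessRegressor(ConstantKernel() * RBF(), alpha=1e-4).fit(design[:, None], stats)

F_grid = np.linspace(4.0, 12.0, 200)
mu, std = gp.predict(F_grid[:, None], return_std=True)
implausibility = np.abs(mu - obs) / np.sqrt(std**2 + obs_err**2)
not_ruled_out = F_grid[implausibility < 3.0]           # the usual "3 sigma" cutoff
print(not_ruled_out.min(), not_ruled_out.max())
```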

How to cite: Lguensat, R., Deshayes, J., and Balaji, V.: History Matching for the tuning of coupled models: experiments on the Lorenz 96 model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11420, https://doi.org/10.5194/egusphere-egu22-11420, 2022.

EGU22-11465 | Presentations | ITS2.6/AS5.1

Quantile machine learning models for predicting European-wide, high resolution fine-mode Aerosol Optical Depth (AOD) based on ground-based AERONET and satellite AOD data 

Zhao-Yue Chen, Raul Méndez-Turrubiates, Hervé Petetin, Aleks Lacima, Albert Soret Miravet, Carlos Pérez García-Pando, and Joan Ballester

Air pollution is a major environmental risk factor for human health. Among the different air pollutants, Particulate Matter (PM) stands out as the most prominent one, with increasing health effects over the last decades. According to the Global Burden of Disease, PM contributed to 4.14 million premature deaths globally in 2019, more than twice as many as in 1990 (2.04 million). With these numbers in mind, the assessment of ambient PM exposure becomes a key issue in environmental epidemiology. However, the limited number of ground-level sites measuring daily PM values is a major constraint for the development of large-scale, high-resolution epidemiological studies.

In the last five years, there has been a growing number of initiatives estimating ground-level PM concentrations based on satellite Aerosol Optical Depth (AOD) data, representing a low-cost alternative with higher spatial coverage compared to ground-level measurements. At present, the most popular AOD product is NASA’s MODIS (Moderate Resolution Imaging Spectroradiometer), but the data it provides are restricted to Total Aerosol Optical Depth (TAOD). Compared with TAOD, Fine-mode Aerosol Optical Depth (FAOD) better describes the distribution of small-diameter particles (e.g. PM10 and PM2.5), which are generally those associated with anthropogenic activity. Complementarily, AERONET (AErosol RObotic NETwork), the network of ground-based sun photometers, additionally provides Fine- and Coarse-mode Aerosol Optical Depth (FAOD and CAOD) products based on the Spectral Deconvolution Algorithm (SDA).

Within the framework of the ERC project EARLY-ADAPT (https://early-adapt.eu/), which aims to disentangle the association between human health, climate variability and air pollution to better estimate the early adaptation response to climate change, here we develop quantile machine learning models to further advance the association between AERONET FAOD and satellite AOD over Europe during the last two decades. Due to the large amount of missing data in the satellite estimates, we also included the AOD estimates from ECMWF’s Copernicus Atmosphere Monitoring Service Global Reanalysis (CAMSRA) and NASA’s Modern-Era Retrospective Analysis for Research and Applications v2 (MERRA-2), together with atmosphere, land and ocean variables such as boundary layer height, downward UV radiation and cloud cover from ECMWF’s ERA5-Land.

The models were thoroughly validated with spatial cross-validation. Preliminary results show that the R2 of the three AOD estimates (TAOD, FAOD and CAOD) predicted with quantile machine learning models ranges between 0.61 and 0.78, and the RMSE between 0.02 and 0.03. For the Pearson correlation with ground-level PM2.5, the predicted FAOD scores highest (0.38), compared with 0.18, 0.11 and 0.09 for the satellite, MERRA-2 and CAMSRA AOD, respectively. This study provides three useful indicators for further estimating PM, which could improve our understanding of air pollution in Europe and open new avenues for large-scale, high-resolution environmental epidemiology studies.
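
As an illustration of the modelling approach (not the authors' implementation), quantile estimates can be obtained with gradient boosting by fitting one model per quantile. All predictors and the target below are synthetic stand-ins for the satellite, reanalysis and AERONET data described above.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(5_000, 4))       # e.g. satellite AOD, reanalysis AOD, boundary layer height, cloud cover
y = 0.3 * X[:, 0] + 0.1 * X[:, 1] + 0.05 * rng.normal(size=5_000)   # AERONET FAOD stand-in

quantile_models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=300).fit(X, y)
    for q in (0.05, 0.5, 0.95)
}
lower, median, upper = (quantile_models[q].predict(X[:5]) for q in (0.05, 0.5, 0.95))
print(median)            # central estimate
print(lower, upper)      # 90% prediction interval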

How to cite: Chen, Z.-Y., Méndez-Turrubiates, R., Petetin, H., Lacima, A., Soret Miravet, A., Pérez García-Pando, C., and Ballester, J.: Quantile machine learning models for predicting European-wide, high resolution fine-mode Aerosol Optical Depth (AOD) based on ground-based AERONET and satellite AOD data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11465, https://doi.org/10.5194/egusphere-egu22-11465, 2022.

EGU22-11924 | Presentations | ITS2.6/AS5.1

Automated detection and classification of synoptic scale fronts from atmospheric data grids 

Stefan Niebler, Peter Spichtinger, Annette Miltenberger, and Bertil Schmidt

Automatic determination of fronts from atmospheric data is an important task for weather prediction as well as for research on synoptic-scale phenomena. We developed a deep neural network to detect and classify fronts from multi-level ERA5 reanalysis data. Model training and prediction are evaluated using two different regions covering Europe and North America with data from two weather services. Due to a label deformation step performed during training, we are able to directly generate frontal lines with no further thinning during post-processing. Our network compares well against the weather service labels, with a Critical Success Index higher than 66.9% and an Object Detection Rate of more than 77.3%. Additionally, the frontal climatologies generated from our network's output are highly correlated (greater than 77.2%) with climatologies created from weather service data. Evaluation of cross sections of our detection results provides further insight into the characteristics of our predicted fronts and shows that our network's classification is physically plausible.
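
For reference, the Critical Success Index quoted above is hits / (hits + false alarms + misses). The following grid-cell sketch with made-up boolean masks shows the computation; the study itself scores frontal lines rather than raw grid cells.

import numpy as np

def critical_success_index(pred, truth):
    tp = np.sum(pred & truth)      # hits
    fp = np.sum(pred & ~truth)     # false alarms
    fn = np.sum(~pred & truth)     # misses
    return tp / (tp + fp + fn)

pred = np.array([True, True, False, False, True])
truth = np.array([True, False, False, True, True])
print(critical_success_index(pred, truth))   # 0.5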

How to cite: Niebler, S., Spichtinger, P., Miltenberger, A., and Schmidt, B.: Automated detection and classification of synoptic scale fronts from atmospheric data grids, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11924, https://doi.org/10.5194/egusphere-egu22-11924, 2022.

EGU22-12043 | Presentations | ITS2.6/AS5.1

A Domain-Change Approach to the Semantic Labelling of Remote Sensing Images 

Chandrabali Karmakar, Gottfried Schwartz, Corneliu Octavian Dumitru, and Mihai Datcu

For many years, image classification – mainly based on pixel brightness statistics – has been among the most popular remote sensing applications. However, in recent years many users have become more and more interested in the application-oriented semantic labelling of the remote sensing image objects depicted in given images.


In parallel, the development of deep learning algorithms has led to several powerful image classification and annotation tools that became popular in the remote sensing community. In most cases, these publicly available tools combine efficient algorithms with expert knowledge and/or external information ingested during an initial training phase, and we often encounter two alternative types of deep learning approaches, namely Autoencoders (AEs) and Convolutional Neural Networks (CNNs). Both approaches try to convert the pixel data of remote sensing images into semantic maps of the imaged areas. In our case, we made an attempt to provide an efficient new semantic annotation tool that helps in the semantic interpretation of newly recorded images with known and/or possibly unknown content.


Typical cases are remote sensing images depicting unexpected and hitherto uncharted phenomena such as flooding events or destroyed infrastructure. When we resort to the commonly applied AE or CNN software packages we cannot expect that existing statistics, or a few initial ground-truth annotations made by an image interpreter, will automatically lead to a perfect understanding of the image content. Instead, we have to discover and combine a number of additional relationships that define the actual content of a selected image and many of its characteristics.

Our approach is a two-stage domain-change procedure in which we first convert an image into a purely mathematical ‘topic representation’ initially introduced by Blei [1]. This representation provides statistics-based topics that do not yet require final application-oriented labelling describing physical categories or phenomena, and it supports the idea of explainable machine learning [2]. Then, during a second stage, we derive physical image content categories by exploiting a weighted multi-level neural network approach that converts weighted topics into individual application-oriented labels. This domain-changing learning stage limits label noise and is initially supported by an image interpreter, allowing the joint use of pixel statistics and expert knowledge [3]. The activity of the image interpreter can be limited to a few image patches. We tested our approach on a number of different use cases (e.g., polar ice, agriculture, natural disasters) and found that our concept provides promising results.
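
A minimal sketch of the first stage, representing image patches by statistics-based topics via Latent Dirichlet Allocation [1], is given below. The quantized patch "words" are synthetic, and the second stage (mapping topic vectors to application-oriented labels with a small neural network) is omitted.

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(3)
# bag-of-words counts per image patch (n_patches x vocabulary_size)
patch_word_counts = rng.poisson(1.0, size=(500, 200))

lda = LatentDirichletAllocation(n_components=20, random_state=0)
topic_representation = lda.fit_transform(patch_word_counts)   # one topic vector per patch
print(topic_representation.shape)                             # (500, 20); each row sums to ~1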


[1] D.M. Blei, A.Y. Ng, and M.I. Jordan, (2003). Latent Dirichlet Allocation, Journal of Machine Learning Research, Vol. 3, pp. 993-1022.
[2] C. Karmakar, C.O. Dumitru, G. Schwarz, and M. Datcu (2020). Feature-free explainable data mining in SAR images using latent Dirichlet allocation, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 14, pp. 676-689.
[3] C.O. Dumitru, G. Schwarz, and M. Datcu (2021). Semantic Labelling of Globally Distributed Urban and Non-Urban Satellite Images Using High-Resolution SAR Data, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 15, pp. 6009-6068.

How to cite: Karmakar, C., Schwartz, G., Dumitru, C. O., and Datcu, M.: A Domain-Change Approach to the Semantic Labelling of Remote Sensing Images, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12043, https://doi.org/10.5194/egusphere-egu22-12043, 2022.

EGU22-12489 | Presentations | ITS2.6/AS5.1

“Fully-automated” clustering method for stress inversions (CluStress) 

Lukács Kuslits, Lili Czirok, and István Bozsó

As is well known, stress fields are responsible for earthquake formation. In order to analyse the stress relations in a study area using inversions of focal mechanisms (FMS), it is vital to consider three fundamental criteria:

(1) The investigated area is characterized by a homogeneous stress field.

(2) The earthquakes occur with variable directions on pre-existing faults.

(3) The deviation of the fault slip vector from the shear stress vector is minimal (Wallace-Bott hypothesis).

The authors have attempted to develop a “fully-automated” algorithm to carry out the classification of the earthquakes as a prerequisite for stress estimations. This algorithm does not require the setting of hyper-parameters, so subjectivity can be reduced significantly and the running time can also decrease. Nevertheless, there is an optional hyper-parameter that can be used to filter outliers, i.e. isolated points (earthquakes), in the input dataset.

In this presentation, the authors show the operation of this algorithm on synthetic datasets consisting of different groups of FMS and on a real seismic dataset. The latter comes from a survey area in the earthquake-prone Vrancea zone (Romania). This is a relatively small region (around 30 × 70 km) in the external part of the SE Carpathians where the distribution of seismic events is quite dense and heterogeneous.

It should be noted that, though the initial results are promising, further developments are still necessary. The source code will soon be uploaded to a public GitHub repository and made available to the whole scientific community.

How to cite: Kuslits, L., Czirok, L., and Bozsó, I.: “Fully-automated” clustering method for stress inversions (CluStress), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12489, https://doi.org/10.5194/egusphere-egu22-12489, 2022.

EGU22-12549 | Presentations | ITS2.6/AS5.1

Joint calibration and mapping of satellite altimetry data using trainable variational models 

Quentin Febvre, Ronan Fablet, Julien Le Sommer, and Clément Ubelmann

Satellite radar altimeters are a key source of observations of ocean surface dynamics. However, current sensor technology and mapping techniques do not yet allow scales smaller than 100 km to be resolved systematically. With their new sensors, upcoming wide-swath altimeter missions such as SWOT should help resolve finer scales. Current mapping techniques rely on the quality of the input data, which is why the raw data go through multiple preprocessing stages before being used. These calibration stages are improved and refined over many years and represent a challenge when a new type of sensor starts acquiring data.

We show how a data-driven variational data assimilation framework can be used to jointly learn a calibration operator and an interpolator from non-calibrated data. The proposed framework significantly outperforms the operational state-of-the-art mapping pipeline and truly benefits from wide-swath data to resolve finer scales on the global map as well as in the SWOT sensor geometry.

How to cite: Febvre, Q., Fablet, R., Le Sommer, J., and Ubelmann, C.: Joint calibration and mapping of satellite altimetry data using trainable variational models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12549, https://doi.org/10.5194/egusphere-egu22-12549, 2022.

EGU22-12574 | Presentations | ITS2.6/AS5.1 | Highlight

SWIFT-AI: Significant Speed-up in Modelling the Stratospheric Ozone Layer 

Helge Mohn, Daniel Kreyling, Ingo Wohltmann, Ralph Lehmann, Peter Maass, and Markus Rex

The stratospheric ozone layer is commonly represented in climate models only in a very simplified way. Neglecting the mutual interactions of ozone with atmospheric temperature and dynamics makes climate projections less accurate. Although more elaborate and interactive models of the stratospheric ozone layer are available, they require far too much computation time to be coupled with climate models. Our aim with this project was to break new ground and pursue an interdisciplinary strategy that spans the fields of machine learning, atmospheric physics and climate modelling.

In this work, we present an implicit neural representation of the extrapolar stratospheric ozone chemistry (SWIFT-AI). An implicitly defined hyperspace of the stratospheric ozone chemistry offers a continuous and even differentiable representation that can be parameterized by artificial neural networks. We analysed different parameter-efficient variants of multilayer perceptrons. This was followed by an intensive and, as far as possible, energy-efficient hyperparameter search involving Bayesian optimisation and early-stopping techniques.

Our data source is the Lagrangian chemistry and transport model ATLAS. Using its full model of stratospheric ozone chemistry, we focused on simulating the wide range of stratospheric variability that will occur in a future climate (e.g. temperature and meridional circulation changes). We conducted a simulation spanning several years and created a dataset with over 200 million input-output pairs. Each output is the 24-hour ozone tendency of a trajectory. We performed a dimensionality reduction of the input parameters by using the concept of chemical families and by performing a sensitivity analysis to choose a set of robust input parameters.

We coupled the resulting machine learning models with the Lagrangian chemistry and transport model ATLAS, substituting the full stratospheric chemistry model. We validated a two-year simulation run by comparing the differences in accuracy and computation time with respect to both the full stratospheric chemistry model and the previous polynomial approach of extrapolar SWIFT. We found that SWIFT-AI consistently outperforms the previous polynomial approach of SWIFT, both on the test data and in the simulation results. SWIFT-AI runs more than twice as fast as the previous polynomial approach SWIFT and 700 times faster than the full stratospheric chemistry scheme of ATLAS, resulting in minutes instead of weeks of computation time per model year – a speed-up of several orders of magnitude.

To ensure reproducibility and transparency, we developed a machine learning pipeline, published a benchmark dataset and made our repository open to the public.

In summary, we could show that the application of state-of-the-art machine learning methods to the field of atmospheric physics holds great potential. The achieved speed-up of an interactive and very precise ozone layer enables a novel way of representing the ozone layer in climate models. This in turn will increase the quality of climate projections, which are crucial for policy makers and of great importance for our planet.

How to cite: Mohn, H., Kreyling, D., Wohltmann, I., Lehmann, R., Maass, P., and Rex, M.: SWIFT-AI: Significant Speed-up in Modelling the Stratospheric Ozone Layer, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12574, https://doi.org/10.5194/egusphere-egu22-12574, 2022.

EGU22-12628 | Presentations | ITS2.6/AS5.1

Prediction of the North Atlantic Oscillation index for the winter months December-January-February via nonlinear methods 

C. Hauke, B. Ahrens, and C. Dalelane

Recently, an increase in the forecast skill of the seasonal climate forecast for winter in Europe has been achieved through an ensemble subsampling approach, by predicting the mean winter North Atlantic Oscillation (NAO) index through linear regression (based on the autumn state of the four predictors sea surface temperature, Arctic sea ice volume, Eurasian snow depth and stratospheric temperature) and sampling the ensemble members which are able to reproduce this NAO state. This thesis shows that the statistical prediction of the NAO index can be further improved via nonlinear methods using the same predictor variables as in the linear approach. This likely also leads to an increase in seasonal climate forecast skill. The data used for the calculations stem from the ERA5 global reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF). The available time span covered only 40 years, from 1980 to 2020, hence it was important to use a method that still yields statistically significant and meaningful results under those circumstances. The nonlinear method chosen was k-nearest neighbors, a simple yet powerful algorithm when little data is available. Compared to other methods like neural networks it is easy to interpret. The resulting method has been developed and tested in a double cross-validation setting. While sea ice in the Barents-Kara Sea in September-October shows the most predictive capability for the NAO index in the subsequent winter as a single predictor, the highest forecast skill is achieved through a combination of different predictor variables.
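
A hedged sketch of such a k-nearest-neighbour prediction in a double (nested) cross-validation setting is given below. The four autumn predictors and the winter NAO index are replaced by synthetic values, and the scoring metric and neighbour range are illustrative choices only.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV, LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 4))     # autumn SST, sea-ice volume, snow depth, stratospheric temperature
y = 0.5 * X[:, 1] + 0.3 * X[:, 0] + 0.2 * rng.normal(size=40)   # winter NAO index stand-in

# inner loop selects k, outer leave-one-out loop estimates the skill (double cross-validation)
model = GridSearchCV(
    make_pipeline(StandardScaler(), KNeighborsRegressor()),
    {"kneighborsregressor__n_neighbors": range(2, 11)},
    cv=5,
)
skill = cross_val_score(model, X, y, cv=LeaveOneOut(), scoring="neg_mean_absolute_error")
print("leave-one-out mean absolute error:", -skill.mean())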

How to cite: Hauke, C., Ahrens, B., and Dalelane, C.: Prediction of the North Atlantic Oscillation index for the winter months December-January-February via nonlinear methods, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12628, https://doi.org/10.5194/egusphere-egu22-12628, 2022.

EGU22-12765 | Presentations | ITS2.6/AS5.1

Supervised machine learning to estimate instabilities in chaotic systems: computation of local Lyapunov exponents 

Daniel Ayers, Jack Lau, Javier Amezcua, Alberto Carrassi, and Varun Ojha

Weather and climate are well-known exemplars of chaotic systems exhibiting extreme sensitivity to initial conditions. Initial condition errors are subject to exponential growth on average, but the rate and characteristics of such growth are highly state dependent. In an ideal setting where the degree of predictability of the system is known in real time, it may be possible and beneficial to take adaptive measures. For instance, a local decrease of predictability may be counteracted by increasing the time or space resolution of the model computation, or the ensemble size in the context of ensemble-based data assimilation or probabilistic forecasting.

Local Lyapunov exponents (LLEs) describe growth rates along a finite-time section of a system trajectory. This makes the LLEs ideal quantities for measuring the local degree of predictability, yet a main bottleneck for their real-time use in operational scenarios is the huge computational cost. Calculating LLEs involves computing a long trajectory of the system, propagating perturbations with the tangent linear model, and repeatedly orthogonalising them. We investigate whether machine learning (ML) methods can estimate the LLEs based only on information from the system’s solution, thus avoiding the need to evolve perturbations via the tangent linear model. We test the ability of four algorithms (regression tree, multilayer perceptron, convolutional neural network and long short-term memory network) to perform this task in two prototypical low-dimensional chaotic dynamical systems. Our results suggest that the accuracy of the ML predictions is highly dependent upon the nature of the distribution of the LLE values in phase space: large prediction errors occur in regions of the attractor where the LLE values are highly non-smooth. In line with classical dynamical systems studies, the neutral LLE is more difficult to predict. We show that a comparatively simple regression tree can achieve performance similar to sophisticated neural networks, and that the success of ML strategies for exploiting the temporal structure of data depends on the system dynamics.
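
For context, the reference computation that the ML models aim to bypass can be sketched as tangent-linear propagation of perturbations with repeated QR re-orthogonalisation; the example below uses the Lorenz 63 system and a simple Euler integrator purely as an illustration, not the study's setup.

import numpy as np

def lorenz63(x, s=10.0, r=28.0, b=8.0 / 3.0):
    return np.array([s * (x[1] - x[0]), x[0] * (r - x[2]) - x[1], x[0] * x[1] - b * x[2]])

def jacobian(x, s=10.0, r=28.0, b=8.0 / 3.0):
    return np.array([[-s, s, 0.0], [r - x[2], -1.0, -x[0]], [x[1], x[0], -b]])

dt, n_steps, window = 0.01, 5000, 50
x = np.array([1.0, 1.0, 1.0])
Q = np.eye(3)
log_growth = np.zeros(3)
for step in range(1, n_steps + 1):
    x = x + dt * lorenz63(x)              # forward Euler step of the trajectory
    Q = Q + dt * jacobian(x) @ Q          # tangent linear propagation of perturbations
    Q, R = np.linalg.qr(Q)                # re-orthogonalisation
    log_growth += np.log(np.abs(np.diag(R)))
    if step % window == 0:                # finite-time (local) exponents over the window
        lles = log_growth / (window * dt)
        log_growth[:] = 0.0
print("last local Lyapunov exponents:", lles)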

How to cite: Ayers, D., Lau, J., Amezcua, J., Carrassi, A., and Ojha, V.: Supervised machine learning to estimate instabilities in chaotic systems: computation of local Lyapunov exponents, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12765, https://doi.org/10.5194/egusphere-egu22-12765, 2022.

EGU22-13228 | Presentations | ITS2.6/AS5.1 | Highlight

Developing a data-driven ocean forecast system 

Rachel Furner, Peter Haynes, Dan Jones, Dave Munday, Brooks Paige, and Emily Shuckburgh

The recent boom in machine learning and data science has led to a number of new opportunities in the environmental sciences. In particular, process-based weather and climate models (simulators) represent the best tools we have to predict, understand and potentially mitigate the impacts of climate change and extreme weather. However, these models are incredibly complex and require huge amounts of High Performance Computing resources. Machine learning offers opportunities to greatly improve the computational efficiency of these models by developing data-driven emulators.

Here I discuss recent work to develop a data-driven model of the ocean, an integral part of the weather and climate system. Much recent progress has been made in developing data-driven forecast systems for atmospheric weather, highlighting the promise of these systems. These techniques can also be applied to the ocean; however, modelling the ocean poses some fundamental differences and challenges in comparison to modelling the atmosphere. For example, oceanic flow is bathymetrically constrained across a wide range of spatial and temporal scales.

We train a neural network on the output from an expensive process-based simulator of an idealised channel configuration of oceanic flow. We show the model is able to learn the complex dynamics of the system well, replicating the mean flow and details within the flow over single prediction steps. We also see that when iterating the model, predictions remain stable and continue to match the ‘truth’ over a short-term forecast period, here around a week.

How to cite: Furner, R., Haynes, P., Jones, D., Munday, D., Paige, B., and Shuckburgh, E.: Developing a data-driven ocean forecast system, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13228, https://doi.org/10.5194/egusphere-egu22-13228, 2022.

EGU22-591 | Presentations | ITS2.7/AS5.2

Identifying precursors for extreme stratospheric polar vortex events using an explainable neural network 

Zheng Wu, Tom Beucler, Raphaël de Fondeville, Eniko Székely, Guillaume Obozinski, William Ball, and Daniela Domeisen

The winter stratospheric polar vortex exhibits considerable variability in both magnitude and zonal wave structure, which arises in part from stratosphere-troposphere coupling associated with tropospheric precursors and can result in extreme polar vortex events. These extremes can subsequently influence weather in the troposphere and are thus important sources of surface predictability. However, the predictability limit of these extreme events is around 1-2 weeks in state-of-the-art prediction systems. In order to explore and improve the predictability limit of extreme vortex events, in this study we train an artificial neural network (ANN) to model stratospheric polar vortex anomalies and to identify strong and weak stratospheric vortex events. To pinpoint the origins of the stratospheric anomalies, we then employ two neural network visualization methods, SHapley Additive exPlanations (SHAP) and Layerwise Relevance Propagation (LRP), to uncover feature importance in the input variables (e.g., geopotential height and background zonal wind). The extreme vortex events can be identified by the ANN with an average accuracy of 60-80%. For the correctly identified extreme events, the composite of the feature importance of the input variables shows spatial patterns consistent with the precursors found for extreme stratospheric events in previous studies. This consistency provides confidence that the ANN is able to identify reliable indicators for extreme stratospheric vortex events and that it could help to identify the role of previously found precursors, such as the sea level pressure anomalies associated with the Siberian high. In addition to the composite of all the events, the feature importance for each individual event further reveals the physical structures in the input variables (such as the locations of the geopotential height anomalies) that are specific to that event. Our results show the potential of explainable neural network techniques in understanding and predicting stratospheric variability and extreme events, and in searching for potential precursors for these events on subseasonal time scales.

How to cite: Wu, Z., Beucler, T., de Fondeville, R., Székely, E., Obozinski, G., Ball, W., and Domeisen, D.: Identifying precursors for extreme stratospheric polar vortex events using an explainable neural network, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-591, https://doi.org/10.5194/egusphere-egu22-591, 2022.

EGU22-676 | Presentations | ITS2.7/AS5.2

A two-stage machine learning framework using global satellite data of cloud classes for process-oriented model evaluation 

Arndt Kaps, Axel Lauer, Gustau Camps-Valls, Pierre Gentine, Luis Gómez-Chova, and Veronika Eyring

Clouds play a key role in weather and climate but are quite challenging to simulate with global climate models as the relevant physics include non-linear processes on scales covering several orders of magnitude in both the temporal and spatial dimensions. The numerical representation of clouds in global climate models therefore requires a high degree of parameterization, which makes a careful evaluation a prerequisite not only for assessing the skill in reproducing observed climate but also for building confidence in projections of future climate change. Current methods to achieve this usually involve the comparison of multiple large-scale physical properties in the model output to observational data. Here, we introduce a two-stage data-driven machine learning framework for process-oriented evaluation of clouds in climate models based directly on widely known cloud types. The first step relies on CloudSat satellite data to assign cloud labels in line with cloud types defined by the World Meteorological Organization (WMO) to MODIS pixels using deep neural networks. Since the method is supervised and trained on labels provided by CloudSat, the predicted cloud types remain objective and do not require a posteriori labeling. The second step consists of a regression algorithm that predicts fractional cloud types from retrieved cloud physical variables. This step aims to ensure that the method can be used with any data set providing physical variables comparable to MODIS. In particular, we use a Random Forest regression that acts as a transfer model to evaluate the spatially relatively coarse output of climate models and allows the use of varying input features. As a proof of concept, the method is applied to coarse grained ESA Cloud CCI data. The predicted cloud type distributions are physically consistent and show the expected features of the different cloud types. This demonstrates how advanced observational products can be used with this method to obtain cloud type distributions from coarse data, allowing for a process-based evaluation of clouds in climate models.
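
As a rough illustration of the second stage (not the authors' code), a multi-output random-forest regression can map coarse cloud physical variables to fractional cloud-type occurrence. The predictors and the stage-one labels below are synthetic placeholders.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n_cells, n_types = 2_000, 8          # grid cells and WMO-like cloud classes
X = rng.normal(size=(n_cells, 5))    # e.g. cloud optical depth, top pressure, water path, ...
fractions = rng.dirichlet(np.ones(n_types), size=n_cells)   # stand-in for stage-one cloud-type fractions

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, fractions)
predicted = rf.predict(X[:3])
print(predicted.sum(axis=1))         # fractional types sum to ~1 per grid cell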

How to cite: Kaps, A., Lauer, A., Camps-Valls, G., Gentine, P., Gómez-Chova, L., and Eyring, V.: A two-stage machine learning framework using global satellite data of cloud classes for process-oriented model evaluation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-676, https://doi.org/10.5194/egusphere-egu22-676, 2022.

EGU22-696 | Presentations | ITS2.7/AS5.2 | Highlight

Latent Linear Adjustment Autoencoder: a novel method for estimating dynamic precipitation at high resolution 

Christina Heinze-Deml, Sebastian Sippel, Angeline G. Pendergrass, Flavio Lehner, and Nicolai Meinshausen

A key challenge in climate science is to quantify the forced response in impact-relevant variables such as precipitation against the background of internal variability, both in models and observations. Dynamical adjustment techniques aim to remove unforced variability from a target variable by identifying patterns associated with circulation, thus effectively acting as a filter for dynamically induced variability. The forced contributions are interpreted as the variation that is unexplained by circulation. However, dynamical adjustment of precipitation at local scales remains challenging because of large natural variability and the complex, nonlinear relationship between precipitation and circulation particularly in heterogeneous terrain. 

In this talk, I will present the Latent Linear Adjustment Autoencoder (LLAAE), a novel statistical model that builds on variational autoencoders. The Latent Linear Adjustment Autoencoder enables estimation of the contribution of a coarse-scale atmospheric circulation proxy to daily precipitation at high resolution and in a spatially coherent manner. To predict circulation-induced precipitation, the LLAAE combines a linear component, which models the relationship between circulation and the latent space of an autoencoder, with the autoencoder's nonlinear decoder. The combination is achieved by imposing an additional penalty in the cost function that encourages linearity between the circulation field and the autoencoder's latent space, hence leveraging robustness advantages of linear models as well as the flexibility of deep neural networks. 
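
A conceptual sketch of this construction, simplified here to a plain (non-variational) autoencoder, is given below: the reconstruction loss is combined with a penalty that ties a linear map of the circulation field to the latent space, and applying the decoder to that linear prediction yields the circulation-induced precipitation. Layer sizes and the penalty weight are illustrative.

import torch
import torch.nn as nn

class LLAAE(nn.Module):
    def __init__(self, precip_dim, circ_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(precip_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, precip_dim))
        self.linear = nn.Linear(circ_dim, latent_dim)        # linear circulation-to-latent map

    def forward(self, precip, circulation):
        z = self.encoder(precip)
        recon = self.decoder(z)
        z_from_circ = self.linear(circulation)
        dyn_precip = self.decoder(z_from_circ)               # circulation-induced precipitation
        return recon, z, z_from_circ, dyn_precip

def llaae_loss(recon, precip, z, z_from_circ, weight=1.0):
    # reconstruction term plus penalty encouraging linearity between circulation and latent space
    return nn.functional.mse_loss(recon, precip) + weight * nn.functional.mse_loss(z_from_circ, z)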

We show that our model predicts realistic daily winter precipitation fields at high resolution based on a 50-member ensemble of the Canadian Regional Climate Model at 12 km resolution over Europe, capturing, for instance, key orographic features and geographical gradients. Using the Latent Linear Adjustment Autoencoder to remove the dynamic component of precipitation variability, forced thermodynamic components are expected to remain in the residual, which enables the uncovering of forced precipitation patterns of change from just a few ensemble members. We extend this to quantify the forced pattern of change conditional on specific circulation regimes. 

Future applications could include, for instance, weather generators emulating climate model simulations of regional precipitation, detection and attribution at subcontinental scales, or statistical downscaling and transfer learning between models and observations to exploit the typically much larger sample size in models compared to observations.

How to cite: Heinze-Deml, C., Sippel, S., Pendergrass, A. G., Lehner, F., and Meinshausen, N.: Latent Linear Adjustment Autoencoder: a novel method for estimating dynamic precipitation at high resolution, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-696, https://doi.org/10.5194/egusphere-egu22-696, 2022.

EGU22-722 | Presentations | ITS2.7/AS5.2 | Highlight

Climate-Invariant, Causally Consistent Neural Networks as Robust Emulators of Subgrid Processes across Climates 

Tom Beucler, Fernando Iglesias-Suarez, Veronika Eyring, Michael Pritchard, Jakob Runge, and Pierre Gentine

Data-driven algorithms, in particular neural networks, can emulate the effects of unresolved processes in coarse-resolution Earth system models (ESMs) if trained on high-resolution simulation or observational data. However, they can (1) make large generalization errors when evaluated in conditions they were not trained on; and (2) trigger instabilities when coupled back to ESMs.

First, we propose to physically rescale the inputs and outputs of neural networks to help them generalize to unseen climates. Applied to the offline parameterization of subgrid-scale thermodynamics (convection and radiation) in three distinct climate models, we show that rescaled or "climate-invariant" neural networks make accurate predictions in test climates that are 8K warmer than their training climates. Second, we propose to eliminate spurious causal relations between inputs and outputs by using a recently developed causal discovery framework (PCMCI). For each output, we run PCMCI on the input time series to identify the reduced set of inputs that have the strongest causal relationship with the output. Preliminary results show that we can reach similar levels of accuracy by training one neural network per output with the reduced set of inputs; stability implications when coupled back to the ESM are explored.

Overall, our results suggest that explicitly incorporating physical knowledge into data-driven models of Earth system processes may improve their ability to generalize across climate regimes, while quantifying causal associations to select the optimal set of inputs may improve their consistency and stability.

How to cite: Beucler, T., Iglesias-Suarez, F., Eyring, V., Pritchard, M., Runge, J., and Gentine, P.: Climate-Invariant, Causally Consistent Neural Networks as Robust Emulators of Subgrid Processes across Climates, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-722, https://doi.org/10.5194/egusphere-egu22-722, 2022.

EGU22-1065 | Presentations | ITS2.7/AS5.2 | Highlight

Skilful US Soy-yield forecasts at pre-sowing lead-times 

Sem Vijverberg, Dim Coumou, and Raed Hamed

Soy harvest failure events can severely impact farmers and insurance companies and raise global prices. Reliable seasonal forecasts of harvest failures would allow stakeholders to prepare and take appropriate early action. However, especially for farmers, the reliability and lead time of current prediction systems provide insufficient information to justify within-season adaptation measures. Recent innovations have increased our ability to generate reliable statistical seasonal forecasts. Here, we combine these innovations to predict the 1-3 poor soy harvest years in the eastern US. We first use a clustering algorithm to spatially aggregate crop-producing regions within the eastern US that are particularly sensitive to hot and dry weather conditions. Next, we use observational climate variables (sea surface temperature (SST) and soil moisture) to extract precursor timeseries at multiple lags. This allows the machine learning model to learn the low-frequency evolution, which carries important information for predictability. A selection based on causal inference allows for physically interpretable precursors. We show that the robustly selected predictors are associated with the evolution of the horseshoe Pacific SST pattern, in line with previous research. We use the state of the horseshoe Pacific to identify years with enhanced predictability. We achieve very high forecast skill for poor harvest events, even 3 months prior to sowing, using a strict one-step-ahead train-test splitting. Over the last 25 years, 90% of the events predicted in February were correct. When operational, this forecast would enable farmers (and insurance/trading companies) to make informed decisions on adaptation measures, e.g., selecting more drought-resistant cultivars, investing in insurance, or changing planting management.

How to cite: Vijverberg, S., Coumou, D., and Hamed, R.: Skilful US Soy-yield forecasts at pre-sowing lead-times, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1065, https://doi.org/10.5194/egusphere-egu22-1065, 2022.

EGU22-1835 | Presentations | ITS2.7/AS5.2

Using Deep Learning for a High-Precision Analysis of Atmospheric Rivers in a High-Resolution Large Ensemble Climate Dataset 

Timothy Higgins, Aneesh Subramanian, Andre Graubner, Lukas Kapp-Schwoerer, Karthik Kashinath, Sol Kim, Peter Watson, Will Chapman, and Luca Delle Monache

Atmospheric rivers (ARs) are elongated corridors of water vapor in the lower troposphere that cause extreme precipitation over many coastal regions around the globe. They play a vital role in the water cycle in the western US, fueling most extreme west coast precipitation and sometimes accounting for more than 50% of total annual west coast precipitation (Gershunov et al. 2017). Severe ARs are associated with extreme flooding and damage, while weak ARs are typically more beneficial to society as they bring much-needed drought relief.

Precipitation is particularly difficult to predict in traditional climate models. Predicting water vapor is more reliable (Lavers et al. 2016), making IVT (integrated vapor transport) and ARs a favorable route to understanding changing patterns in precipitation (Johnson et al. 2009). There is a variety of different algorithms used to track ARs due to their relatively diverse definitions (Shields et al. 2018). The Atmospheric River Tracking Method Intercomparison Project (ARTMIP) organizes and provides information on all of the widely accepted algorithms that exist. Nearly all of the algorithms included in ARTMIP rely on absolute and relative numerical thresholds, which can often be computationally expensive and have a large memory footprint. This can be particularly problematic in large climate datasets. The vast majority of algorithms also heavily factor in wind velocity at multiple vertical levels to track ARs, which is especially difficult to store in climate models and is typically not output at the temporal resolution at which ARs occur.
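
For reference, the IVT diagnostic underlying most ARTMIP threshold algorithms integrates the horizontal moisture flux over pressure; a minimal sketch with synthetic profile values is shown below. The multi-level winds it requires are exactly what the neural-network approach avoids storing.

import numpy as np

g = 9.81                                  # gravitational acceleration, m s-2
p = np.linspace(100000.0, 30000.0, 20)    # pressure levels, Pa (surface to 300 hPa)
q = np.linspace(0.012, 0.0005, 20)        # specific humidity, kg kg-1
u = np.full(20, 15.0)                     # zonal wind, m s-1
v = np.full(20, 5.0)                      # meridional wind, m s-1

ivt_u = -np.trapz(q * u, p) / g           # minus sign because pressure decreases upward
ivt_v = -np.trapz(q * v, p) / g
ivt = np.hypot(ivt_u, ivt_v)              # kg m-1 s-1
print(round(ivt, 1))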

A recent alternative way of tracking ARs is through the use of machine learning. There is a variety of neural networks commonly applied to identifying objects in cityscapes via semantic segmentation. The first of these neural networks applied to detecting ARs was DeepLabv3+ (Prabhat et al. 2020). DeepLabv3+ is a state-of-the-art model that demonstrates one of the highest performances of any present-day neural network when tasked with identifying objects in cityscapes (Wu et al. 2019). We employ a light-weight convolutional neural network adapted from CGNet (Kapp-Schwoerer et al. 2020) to efficiently track these severe events without using wind velocity at all vertical levels as a predictor variable. When applied to cityscapes, CGNet's greatest advantage is its performance relative to its memory footprint (Wu et al. 2019). It has two orders of magnitude fewer parameters than DeepLabv3+ and is computationally less expensive. This can be especially useful when identifying ARs in large datasets. Convolutional neural networks have not previously been used to track ARs in a regional domain. This will also be the first study to demonstrate the performance of this neural network on a regional domain by providing an objective analysis of its consistency with eight different ARTMIP algorithms.

How to cite: Higgins, T., Subramanian, A., Graubner, A., Kapp-Schwoerer, L., Kashinath, K., Kim, S., Watson, P., Chapman, W., and Delle Monache, L.: Using Deep Learning for a High-Precision Analysis of Atmospheric Rivers in a High-Resolution Large Ensemble Climate Dataset, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1835, https://doi.org/10.5194/egusphere-egu22-1835, 2022.

EGU22-2012 | Presentations | ITS2.7/AS5.2

Gap filling in air temperature series by matrix completion methods 

Benoît Loucheur, Pierre-Antoine Absil, and Michel Journée

Quality control of meteorological data is an important part of atmospheric analysis and prediction, as missing or erroneous data can have a negative impact on the accuracy of these environmental products.

In Belgium, the Royal Meteorological Institute (RMI) is the national meteorological service that provides weather and climate services based on observations and scientific research. RMI has collected and archived meteorological observations in Belgium since the 19th century. Currently, air temperature is monitored in Belgium at about 30 synoptic automatic weather stations (AWS) as well as at 110 manual climatological stations. At the latter stations, a volunteer observer records the daily extreme air temperatures every morning at 8 o'clock. All observations are routinely checked for errors, inconsistencies and missing values by the RMI staff. Misleading data are corrected and gaps are filled with estimates. These quality control tasks require a lot of human intervention. With the forthcoming deployment of low-cost weather stations and the subsequent increase in the volume of data to verify, the process of data quality control and completion should become as automated as possible.

In this work, the quality control process is fully automated using mathematical tools. We present the low-rank matrix completion (LRMC) methods that we used to solve the problem of completing missing data in daily minimum and maximum temperature series. We used a machine learning technique called Monte Carlo cross-validation to train our algorithms and then tested them on a real case.
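
A minimal low-rank completion sketch (iterative soft-thresholded SVD, one of several LRMC variants and not necessarily the one used in this work) on a synthetic stations-by-days temperature matrix illustrates the idea:

import numpy as np

rng = np.random.default_rng(6)
true = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 365))   # low-rank "stations x days" matrix
mask = rng.random(true.shape) > 0.1                            # about 10% of entries missing
observed = np.where(mask, true, 0.0)

X, tau = observed.copy(), 5.0
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    low_rank = (U * np.maximum(s - tau, 0.0)) @ Vt             # soft-threshold the singular values
    X = np.where(mask, observed, low_rank)                     # keep observed entries fixed

rmse = np.sqrt(np.mean((X[~mask] - true[~mask]) ** 2))
print("completion RMSE on the missing entries:", round(rmse, 3))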

Among the matrix completion methods, some are regularised by graphs. In our case, it is then possible to represent the spatial and temporal components via graphs. By adjusting the construction of these graphs, we hope to improve the completion results. We were then able to compare our methods with the state of the art, such as the inverse distance weighting (IDW) method.

All our experiments were performed with a dataset provided by the RMI, including daily minimum and maximum temperature measurements from 100 stations over the period 2005-2019.

How to cite: Loucheur, B., Absil, P.-A., and Journée, M.: Gap filling in air temperature series by matrix completion methods, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2012, https://doi.org/10.5194/egusphere-egu22-2012, 2022.

EGU22-2248 | Presentations | ITS2.7/AS5.2

Exploring flooding mechanisms and their trends in Europe through explainable AI 

Shijie Jiang, Yi Zheng, and Jakob Zscheischler

Understanding the mechanisms causing river flooding and their trends is important for interpreting past flood changes and making better predictions of future flood conditions. However, there is still a lack of quantitative assessment of trends in flooding mechanisms based on observations. Recent years have witnessed the increasing prevalence of machine learning in hydrological modeling, and its predictive power has been demonstrated in numerous studies. Machine learning makes hydrological predictions by recognizing generalizable relationships between inputs and outputs, which, if properly interpreted, may provide further scientific insights into hydrological processes. In this study, we propose a new method using interpretable machine learning to identify flooding mechanisms based on the predictive relationship between precipitation, temperature and flow peaks. Applying this method to more than a thousand catchments in Europe reveals three primary input-output patterns within the flow predictions, which can be associated with three catchment-wide flooding mechanisms: extreme precipitation, soil moisture excess, and snowmelt. The results indicate that approximately one-third of the studied catchments are controlled by a combination of the above mechanisms, while the others are mostly dominated by a single mechanism. Although no significant shifts from one dominant mechanism to another are observed for the catchments over the past seven decades overall, some catchments with single mechanisms have become dominated by mixed mechanisms and vice versa. In particular, snowmelt-induced floods have decreased significantly in general, whereas rainfall has become more dominant in causing floods, and their effects on flooding seasonality and magnitude are crucial. Overall, this study provides a new perspective for understanding climatic extremes and demonstrates the prospect of artificial intelligence (AI)-assisted scientific discovery in the future.

How to cite: Jiang, S., Zheng, Y., and Zscheischler, J.: Exploring flooding mechanisms and their trends in Europe through explainable AI, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2248, https://doi.org/10.5194/egusphere-egu22-2248, 2022.

EGU22-2391 | Presentations | ITS2.7/AS5.2

Exploring cirrus cloud microphysical properties using explainable machine learning 

Kai Jeggle, David Neubauer, Gustau Camps-Valls, Hanin Binder, Michael Sprenger, and Ulrike Lohmann

Cirrus cloud microphysics and their interactions with aerosols remain one of the largest uncertainties in global climate models and climate change projections. The uncertainty originates from the high spatio-temporal variability and their non-linear dependence on meteorological drivers like temperature, updraft velocities, and aerosol environment. We combine ten years of CALIPSO/CloudSat satellite observations of cirrus clouds with ERA5 and MERRA-2 reanalysis data of meteorological and aerosol variables to create a spatial data cube. Lagrangian back trajectories are calculated for each cirrus cloud observation to add a temporal dimension to the data cube. We then train a gradient boosted tree machine learning (ML) model to predict vertically resolved cirrus cloud microphysical properties (i.e. observed ice crystal number concentration and ice water content). The explainable machine learning method of SHAP values is applied to assess the impact of individual cirrus drivers as well as combinations of drivers on cirrus cloud microphysical properties in varying meteorological conditions. In addition, we analyze how the impact of the drivers differs regionally, vertically, and temporally.
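
A hedged sketch of this attribution step, pairing a gradient-boosted tree with SHAP values, is given below. The drivers and the target (an ice crystal number concentration stand-in) are synthetic, whereas the study's model is trained on the CALIPSO/CloudSat, ERA5 and MERRA-2 data cube described above.

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(3_000, 4))               # e.g. temperature, updraft, dust, sulphate
y = 2.0 * X[:, 0] - X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=3_000)   # ICNC stand-in

model = GradientBoostingRegressor(n_estimators=300).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:500])  # per-sample, per-driver contributions
print(np.abs(shap_values).mean(axis=0))       # mean |SHAP| as a global driver importance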

We find that the tree-based ML model is able to create a good mapping between cirrus drivers and microphysical properties (R² ~0.75) and the SHAP value analysis provides detailed insights in how different drivers impact the prediction of the microphysical cirrus cloud properties. These findings can be used to improve global climate model parameterizations of cirrus cloud formation in future works. Our approach is a good example for exploring unsolved scientific questions using explainable machine learning and feeding back insights to the domain science.

How to cite: Jeggle, K., Neubauer, D., Camps-Valls, G., Binder, H., Sprenger, M., and Lohmann, U.: Exploring cirrus cloud microphysical properties using explainable machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2391, https://doi.org/10.5194/egusphere-egu22-2391, 2022.

EGU22-2988 | Presentations | ITS2.7/AS5.2

Correcting biases in climate simulations using unsupervised image-to-image-translation networks 

J. Fulton and B. Clarke

Global circulation models (GCMs) form the basis of a vast portion of earth system research and inform our climate policy. However, our climate system is complex and connected across scales. To simulate it, we must use parameterisations. These parameterisations, which are present in all models, can have a detectable influence on the GCM outputs.

GCMs are improving, but we need to use their current output to optimally estimate the risks of extreme weather. Therefore, we must debias GCM outputs with respect to observations. Current debiasing methods cannot correct both spatial correlations and cross-variable correlations. This limitation means current methods can produce physically implausible weather events - even when the single-location, single-variable distributions match the observations. This limitation is very important for extreme event research. Compound events like heat and drought, which drastically increase wildfire risk, and spatially co-occurring events like multiple bread-basket failures, are not well corrected by these current methods.

We propose using unsupervised image-to-image translation networks to perform bias correction of GCMs. These neural network architectures are used to translate (perform bias correction) between different image domains. For example, they have been used to translate computer-generated city scenes into real-world photos, which requires spatial and cross-variable correlations to be translated. Crucially, these networks learn to translate between image domains without requiring corresponding pairs of images. Such pairs cannot be generated between climate simulations and observations due to the inherent chaos of weather.

In this work, we use these networks to bias correct historical recreation simulations from the HadGEM3-A-N216 atmosphere-only GCM with respect to the ERA5 reanalysis dataset. This GCM has a known bias in simulating the South Asian monsoon, and so we focus on this region. We show the ability of neural networks to correct this bias, and show how combining the neural network with classical techniques produces a better bias correction than either method alone. 

How to cite: Fulton, J. and Clarke, B.: Correcting biases in climate simulations using unsupervised image-to-image-translation networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2988, https://doi.org/10.5194/egusphere-egu22-2988, 2022.

EGU22-3009 | Presentations | ITS2.7/AS5.2

Application of Machine Learning for spatio-temporal mapping of the air temperature in Warsaw 

Amirhossein Hassani, Núria Castell, and Philipp Schneider

Mapping the spatio-temporal distribution of near-surface urban air temperature is crucial to our understanding of climate-sensitive epidemiology, indoor-outdoor thermal comfort, urban biodiversity, and the interactive impacts of climate change and urbanity. Urban-scale decision-making in the face of future climatic uncertainties requires detailed information on near-surface air temperature at high spatio-temporal resolution. However, such fine resolutions cannot currently be achieved by traditional observation networks, or even by regional or global climate models (Hamdi et al. 2020). Given the complexity of the processes affecting air temperature from the urban to the regional scale, here we apply Machine Learning (ML) algorithms, in particular the XGBoost gradient boosting method, to build predictive models of near-surface air temperature (Ta at 2-meter height). These predictive models establish data-driven relations between crowd-sourced measurements of Ta (data produced by citizens’ sensors) and a set of spatial and spatio-temporal predictors, primarily derived from Earth observation satellite data, including MODIS Aqua/Landsat 8 Land Surface Temperature (LST), MODIS Terra vegetation indices, and the Sentinel-2 water vapour product. We use our models to predict the sub-daily (at MODIS Aqua overpass times) variation of urban-scale Ta in the city of Warsaw, Poland, at a spatial resolution of 1 km for the months July-September and the years 2016 to 2021. A 10-fold cross-validation of the developed models shows a root mean square error between 0.97 and 1.02 °C and a coefficient of determination between 0.96 and 0.98, which are satisfactory according to the literature (Taheri-Shahraiyni and Sodoudi 2017). The resulting maps allow us to identify regions of Warsaw that are vulnerable to heat stress. The strength of the method used here is that it can easily be replicated in other EU cities to obtain high-resolution maps, owing to the accessibility and open-source nature of the training and predictor data. Contingent on data availability, the predictive framework developed here can also be used for monitoring and downscaling other governing urban climatic parameters, such as relative humidity, in the context of future climate uncertainties.

Hamdi, R., Kusaka, H., Doan, Q.-V., Cai, P., He, H., Luo, G., Kuang, W., Caluwaerts, S., Duchêne, F., and Van Schaeybroek, B. (2020). The state-of-the-art of urban climate change modeling and observations. Earth Systems and Environment, 1-16.

Taheri-Shahraiyni, H. and Sodoudi, S. (2017). High-resolution air temperature mapping in urban areas: A review on different modelling techniques. Thermal Science, 21(6 Part A): 2267-2286.

How to cite: Hassani, A., Castell, N., and Schneider, P.: Application of Machine Learning for spatio-temporal mapping of the air temperature in Warsaw, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3009, https://doi.org/10.5194/egusphere-egu22-3009, 2022.

EGU22-3105 | Presentations | ITS2.7/AS5.2

Classifying weather types in Europe by Self-Organizing-Maps (SOM) with regard to GCM-based future projections 

S. Wehrmann and T. Mölg

The interdisciplinary research project "BayTreeNet" investigates the reactions of forest ecosystems to current climate dynamics. In the mid-latitudes, local climatic phenomena often show a strong dependence on the large-scale climate dynamics, the weather types (WT), which significantly determine the climate of a region through their frequency and intensity. In the topographically diverse region of Bavaria, different WT are associated with different weather conditions at different locations.

The meaning of every WT is explained for the different forest regions in Bavaria and the results of the climate dynamics sub-project provide the physical basis for the "BayTreeNet" project. Subsequently, climate-growth relationships are established in the dendroecology sub-project to investigate the response of forests to individual WT at different forest sites. Complementary steps allow interpretation of results for the past (20th century) and projection into the future (21st century). One hypothesis to be investigated is that forest sites in Bavaria are affected by a significant influence of climate change in the 21st century and the associated change in WT.

The automated classification of large-scale weather patterns is carried out with Self-Organizing Maps (SOM), developed by Kohonen, which enable the visualization and reduction of high-dimensional data. The poster presents the evaluation and selection of an appropriate SOM setting and its first results. In addition, it is planned to show first analyses of the environmental conditions of the different WT and how these are represented in global climate models (GCMs) in the past and future.
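
A minimal sketch of a Kohonen SOM classification of daily circulation fields is given below, using the MiniSom package as one possible implementation choice (a hypothetical one, not necessarily the project's) and synthetic sea-level-pressure anomaly maps; each SOM node then corresponds to one weather type.

import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(8)
fields = rng.normal(size=(4000, 25 * 35))        # daily pressure-anomaly maps, flattened

som = MiniSom(4, 5, fields.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(fields, 10_000)

weather_types = np.array([som.winner(day) for day in fields])   # best-matching node per day
nodes, counts = np.unique(weather_types, axis=0, return_counts=True)
print(dict(zip(map(tuple, nodes), counts)))      # frequency of each weather type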

How to cite: Wehrmann, S. and Mölg, T.: Classifying weather types in Europe by Self-Organizing-Maps (SOM) with regard to GCM-based future projections, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3105, https://doi.org/10.5194/egusphere-egu22-3105, 2022.

EGU22-3482 | Presentations | ITS2.7/AS5.2

Public perception assessment on climate change and natural disaster influence using social media big-data: A case study of USA 

SungKu Heo, Pouya Ifaei, Mohammad Moosazadeh, and ChangKyoo Yoo

Climate change is a global crisis which influences the development of the human race and society. The threats of climate change have become increasingly recognized by the public and governments across the globe, in the environment, society and the economy, because the consequences of climate change show up not only as an increase of global temperature but also as intensified natural hazards such as floods, droughts, wildfires, and hurricanes. For sustainable development, it is crucial to respond to and mitigate climate change through government policy and decision-making; however, the public's engagement in actions towards this critical environmental crisis still needs to be promoted much more widely. Analyzing the relationship between public awareness of climate change and natural disasters is an essential aspect of climate change mitigation and policymaking. In this study, based on the abundance of text messages in social media, especially Twitter, the public understanding of and discussions about climate change were recognized and analyzed, treating humans as sensors receiving information about climate change from their surrounding environment. Twitter content analysis and field-data impact analysis were conducted; text mining algorithms were applied to the Twitter big data to compute a cosine similarity score (CSS) between the climate change corpus and the natural-event corpora. Then, the factors of natural disaster influence were estimated using a multiple linear regression model and the climate change tweet dataset. This research shows that the public is more inclined to link natural events with climate change in their tweets when serious natural disasters happen. The developed regression model indicated that natural events caused by climate change influenced people's social media activity through messages on Twitter expressing awareness of climate change. The results indicate that the public's experience of natural events, including intense disasters, can lead them to link climate change with those events more readily than people who rarely experience natural events.
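
As an illustration of a cosine similarity score between corpora (not the authors' pipeline), TF-IDF vectors of two toy text snippets can be compared as follows:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

climate_corpus = "global warming is changing our climate and raising temperatures"
event_corpus = "the hurricane and flooding destroyed homes amid record temperatures"

vectors = TfidfVectorizer().fit_transform([climate_corpus, event_corpus])
css = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(round(css, 3))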

Acknowledgment

This research was supported by the project (NRF-2021R1A2C2007838) through the National Research Foundation of Korea (NRF) and the Korea Ministry of Environment (MOE) as Graduate school specialized in Climate Change.

How to cite: Heo, S., Ifaei, P., Moosazadeh, M., and Yoo, C.: Public perception assessment on climate change and natural disaster influence using social media big-data: A case study of USA, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3482, https://doi.org/10.5194/egusphere-egu22-3482, 2022.

EGU22-4431 | Presentations | ITS2.7/AS5.2

Identification of Global Drivers of Indian Summer Monsoon using Causal Inference and Interpretable AI 

Deepayan Chakraborty, Adway Mitra, Bhupendranath Goswami, and Pv Rajesh

Indian Summer Monsoon Rainfall (ISMR) is a complex phenomenon that depends on several climatic phenomena in different parts of the world through teleconnections. Each season is characterized by extended periods of wet and dry spells (which may cause floods or droughts) which contribute to intra-seasonal variability. Tropical and extra-tropical drivers jointly influence the intra-seasonal variability. Although the El Nino-Southern Oscillation (ENSO) is known to be a driver of ISMR, researchers have also found relations with the Indian Ocean Dipole (IOD), the North Atlantic Oscillation (NAO), and the Atlantic Multi-decadal Oscillation (AMO). In this work, we use ideas from Causality Theory and Explainable Machine Learning to quantify the influence of different climatic phenomena on the intraseasonal variation of ISMR.

To identify such causal relations, we applied two statistically sound causal inference approaches, i.e., the PCMCI+ algorithm (conditional-independence based) and the Granger causality test (regression based). For the Granger causality test, we examined linear and non-linear regression separately. In the case of PCMCI+, conditional independence tests were used between pairs of variables at different "lag periods". It is worth pointing out that, until now, "causality" has not been properly quantified in the climate science community, and only linear correlations are used as a basis to identify relationships like ENSO-ISMR and AMO-ISMR. We performed experiments on mean monthly rainfall anomaly data (during the monsoon months of June-September over India) along with six probable drivers (ENSO, AMO, North Atlantic Oscillation, Pacific Decadal Oscillation, Atlantic Nino, and Indian Ocean Dipole) for the months May to September during the period 1861-2016. While the two approaches produced some contradictions, they also produced a common conclusion that ENSO and AMO are equally important and independent drivers of ISMR.

Additionally, we have studied the contribution of the drivers to annual extremes of ISMR (years of deficient and excess rainfall) using Shapley values, a Game Theory concept for quantifying the contributions of different predictors in a model. In this work, we train an XGBoost model to predict the ISMR anomaly from any values of the predictor variables. The experiment is carried out with two approaches. One approach analyzes the contribution of each driver, for each of the ISMR months of a year, to the mean seasonal rainfall anomaly of that year. The other approach focuses on the contribution of the seasonal mean value of each driver to the same. In both approaches, we contrast the distribution of each driver's Shapley values for excess and deficient monsoon years. We find that while ENSO is indeed the dominant driving factor for a majority of these years, AMO is another major factor which frequently contributes to such deficiencies, while Atlantic Nino and the Indian Ocean Dipole also contribute at times. On the other hand, the Indian Ocean Dipole appears to be a major contributor in several years of excess rainfall. As future work, we plan to carry out a robustness analysis of these results, and also examine the drivers of regional extremes.
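A minimal sketch of the Shapley-value analysis described above, using XGBoost and the shap package; the driver matrix below is synthetic placeholder data, and the model settings are illustrative rather than those used in the study.

```python
import numpy as np
import xgboost as xgb
import shap

# X: rows = years, columns = driver indices; y: seasonal-mean ISMR anomaly per year.
# Random data stands in for the real indices here.
rng = np.random.default_rng(0)
drivers = ["ENSO", "AMO", "NAO", "PDO", "AtlNino", "IOD"]
X = rng.normal(size=(150, len(drivers)))
y = 0.6 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * rng.normal(size=150)

model = xgb.XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X, y)

# TreeExplainer gives per-year, per-driver Shapley contributions to the predicted anomaly.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Contrast the distributions of contributions in deficient vs. excess monsoon years.
deficit = y < np.quantile(y, 0.2)
excess = y > np.quantile(y, 0.8)
for j, name in enumerate(drivers):
    print(name, shap_values[deficit, j].mean(), shap_values[excess, j].mean())
```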

How to cite: Chakraborty, D., Mitra, A., Goswami, B., and Rajesh, P.: Identification of Global Drivers of Indian Summer Monsoon using Causal Inference and Interpretable AI, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4431, https://doi.org/10.5194/egusphere-egu22-4431, 2022.

EGU22-4534 | Presentations | ITS2.7/AS5.2

Spatial multi-modality as a way to improve both performance and interpretability of deep learning models to reconstruct phytoplankton time-series in the global ocean 

Joana Roussillon, Jean Littaye, Ronan Fablet, Lucas Drumetz, Thomas Gorgues, and Elodie Martinez

Phytoplankton plays a key role in the carbon cycle and fuels marine food webs. Its seasonal and interannual variations are relatively well known at the global scale thanks to satellite ocean color observations that have been acquired continuously since 1997. However, the satellite-derived chlorophyll-a concentration (Chl-a, a proxy of phytoplankton biomass) time series are still too short to investigate phytoplankton biomass low-frequency variability. Machine learning models such as support vector regression (SVR) or multi-layer perceptrons (MLP) have recently proven to be an alternative to mechanistic approaches for reconstructing past Chl-a signals (including periods before the satellite era) from physical predictors, but they remain unsatisfactory. In particular, the relationships between phytoplankton and its physical surrounding environment are not homogeneous in space, and training such models over the entire globe does not allow them to capture these regional specificities. Moreover, although the global ocean is commonly partitioned into biogeochemical provinces within which phytoplankton growth is assumed to be governed by similar processes, their time-evolving nature makes it difficult to impose a priori spatial constraints that restrict the learning phase to specific areas. Here, we propose to overcome this limitation by introducing spatial multi-modalities into a convolutional neural network (CNN). The latter can learn, without particular supervision, several spatially weighted modes of variability. Each of them is associated with a CNN submodel trained in parallel, standing for a mode-specific response of phytoplankton biomass to the physical forcing. Beyond improving reconstruction performance, we will show that the learned spatial modes appear physically consistent and may help to gain new insights into the physical-biogeochemical processes controlling phytoplankton distribution at the global scale.

How to cite: Roussillon, J., Littaye, J., Fablet, R., Drumetz, L., Gorgues, T., and Martinez, E.: Spatial multi-modality as a way to improve both performance and interpretability of deep learning models to reconstruct phytoplankton time-series in the global ocean, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4534, https://doi.org/10.5194/egusphere-egu22-4534, 2022.

EGU22-4584 | Presentations | ITS2.7/AS5.2

Super-Resolution based Deep Downscaling of Precipitation 

Sumanta Chandra Mishra Sharma and Adway Mitra

Downscaling is widely used to improve the spatial resolution of meteorological variables. Broadly, two classes of techniques are used for downscaling, i.e., dynamical downscaling and statistical downscaling. Dynamical downscaling depends on the boundary conditions of coarse-resolution global models like General Circulation Models (GCMs), whereas statistical models try to capture the statistical relationship between the high-resolution and low-resolution data (Kumar et al., 2021). With the rapid development of deep learning techniques in recent years, deep learning based super-resolution (SR) models have been designed in image processing and computer vision for increasing the resolution of a given image. Researchers from other fields have also adapted these techniques and achieved state-of-the-art performance in various domains. To the best of our knowledge, only a few works have used super-resolution methods in the climate domain for deep downscaling of precipitation data.

These super-resolution approaches mostly use convolutional neural networks (CNN) to accomplish their task. In a CNN, increasing the depth of the model raises the chance of information loss and error propagation (Vandal et al., 2017). To reduce this information loss, we have introduced residual-based deep downscaling models. These models have multiple residual blocks and skip connections between similar types of convolutional layers. The long skip connections in the model help to reduce information loss in the network. These models take as input data that is pre-upsampled by linear interpolation, and then improve the estimates of the pixel values.
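A minimal PyTorch sketch of a pre-upsampled residual downscaling network with short and long skip connections, in the spirit described above; channel counts, block depth, and the single-channel input are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # short skip connection

class ResidualDownscaler(nn.Module):
    """Input: precipitation field bilinearly pre-upsampled to the target grid."""
    def __init__(self, n_blocks=8, channels=64):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        out = self.blocks(feat) + feat   # long skip connection limits information loss
        return self.tail(out) + x        # network learns a correction to the interpolation

# Usage: model = ResidualDownscaler(); hr_estimate = model(lr_precip_upsampled)
```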

In our experiments, we have focused on downscaling of rainfall over the Indian landmass (for Indian summer monsoon rainfall) and for a region in the USA spanning the southeast CONUS and parts of its neighboring states, lying between longitudes 70° W and 100° W and latitudes 24° N and 40° N. The precipitation data for this task were collected from the India Meteorological Department (IMD), Pune, India, and the NOAA Physical Science Laboratory. We have examined our model's predictive behavior and compared it with existing super-resolution models like SRCNN and DeepSD, which have been used earlier for precipitation downscaling. In the DeepSD model, we have used the GTOPO30 land elevation data provided by USGS along with the precipitation data as input. All these models are trained and tested in both geographical regions separately, and it is found that the proposed model performs better than the existing models on multiple accuracy measures such as PSNR and the correlation coefficient for the specific region and scaling factor.

How to cite: Mishra Sharma, S. C. and Mitra, A.: Super-Resolution based Deep Downscaling of Precipitation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4584, https://doi.org/10.5194/egusphere-egu22-4584, 2022.

EGU22-4853 | Presentations | ITS2.7/AS5.2

Can cloud properties provide information on surface wind variations using deep learning? 

Sebastiaan Jamaer, Jérôme Neirynck, and Nicole van Lipzig

Recent studies have shown that the increasing sizes of offshore wind farms can cause reduced energy production through mesoscale interactions with the atmosphere. Therefore, accurate nowcasting of the energy yields of large offshore wind farms depends on accurate predictions of the large synoptic weather systems as well as accurate predictions of the smaller mesoscale weather systems. In general, global or regional forecasting models are very well suited to predict synoptic-scale weather systems. However, satellite or radar data can support the nowcasting of shorter, smaller-scale systems.

In this work, a first step towards nowcasting of the mesoscale wind using satellite images has been taken, namely the coupling of the mesoscale wind component to cloud properties that are available from satellite images using a deep learning framework. To achieve this, a high-resolution regional atmospheric model (COSMO-CLM) was used to generate one year of high-resolution cloud and hub-height wind data. From this wind data the mesoscale component was filtered out and used as target images for the deep learning model. The input images of the model were several cloud-related fields from the atmospheric model. The model itself was a deep convolutional neural network (a U-Net) which was trained to minimize the mean squared error.

This analysis indicates that cloud information can be used to extract information about the mesoscale weather systems and could be used for nowcasting by using the trained U-Net as a basis for a temporal deep learning model. However, future validation with real-world data is still needed to determine the added value of such an approach.

How to cite: Jamaer, S., Neirynck, J., and van Lipzig, N.: Can cloud properties provide information on surface wind variations using deep learning?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4853, https://doi.org/10.5194/egusphere-egu22-4853, 2022.

EGU22-5058 | Presentations | ITS2.7/AS5.2

Can satellite images provide supervision for cloud systems characterization? 

Dwaipayan Chatterjee, Hartwig Deneke, and Susanne Crewell

With ever-increasing resolution, geostationary satellites are able to reveal the complex structure and organization of clouds. How cloud systems organize is important for the local climate and strongly connects to the Earth's response to warming through cloud system feedback.

Motivated by recent developments in computer vision for pattern analysis of uncurated images, our work aims to understand the organization of cloud systems based on high-resolution cloud optical depth images. We are exploiting the self-learning capability of a deep neural network to classify satellite images into different subgroups based on the distribution pattern of the cloud systems.

Unlike most studies, our neural network is trained over the central European domain, which is characterized by strong land surface type and topography variations. The satellite data is post-processed and retrieved at a higher spatio-temporal resolution (2 km, 5 min), enhanced by 66% compared to the current standard, equivalent to the future Meteosat third-generation satellite, which will be launched soon.

We show how recent advances in deep learning networks are used to understand clouds' physical properties in temporal and spatial scales. In a purely data-driven approach, we avoid the noise and bias obtained from human labeling, and with proper scalable techniques, it takes 0.86 ms and 2.13 ms to label an image at two different spatial configurations. We demonstrate explainable artificial intelligence (XAI), which helps gain trust for the neural network's performance.

To generalize the results, a thorough quantitative evaluation is done on two spatial domains and two pixel configurations (128×128 and 64×64). We examine the uncertainty associated with distinct machine-detected cloud-pattern categories. For this, the learned features of the satellite images are extracted from the trained neural network and fed to an independent hierarchical-agglomerative algorithm. The work therefore also explores the uncertainties associated with the automatic machine-detected patterns and how they vary with different cloud classification types.
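A minimal scikit-learn sketch of the independent hierarchical-agglomerative step applied to the learned features; the feature array below is a random placeholder, and the number of clusters and linkage are illustrative choices.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# features: learned representations extracted from the trained network,
# one row per satellite scene (placeholder random data here).
features = np.random.default_rng(0).normal(size=(1000, 128))

# Hierarchical-agglomerative clustering of the learned feature space.
clusterer = AgglomerativeClustering(n_clusters=8, linkage="ward")
labels = clusterer.fit_predict(features)

# Cluster sizes give a first look at how often each machine-detected
# cloud-pattern category occurs in the analysed period.
print(np.bincount(labels))
```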

How to cite: Chatterjee, D., Deneke, H., and Crewell, S.: Can satellite images provide supervision for cloud systems characterization?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5058, https://doi.org/10.5194/egusphere-egu22-5058, 2022.

Extreme weather events, such as droughts, floods or heatwaves, severely impact agricultural yield. However, crop yield failure may also be caused by the temporal or multivariate compounding of more moderate weather events. An example of such an occurrence is the phenomenon of 'false spring', where the combined effects of a warm interval in late winter followed by a period of freezing temperatures can result in severe damage to vegetation. Alternatively, multiple weather events may impact crops simultaneously, as with compound hot and dry weather conditions.

Machine learning techniques are able to learn highly complex and nonlinear relationships between predictors. Such methods have previously been used to explore the influence of monthly- or seasonally-aggregated weather data as well as predefined extreme event indicators on crop yield. However, as crop yield may be impacted by climatic variables at different temporal scales, interpretable machine learning methods that can extract relevant meteorological features from higher-resolution time series data are desirable.

In this study we test the ability of adaptations of random forest models to identify compound meteorological drivers of crop failure from simulated data. In particular, adaptations of random forest models capable of ingesting daily multivariate time series data and spatial information are used. First, we train models to extract useful features from daily climatic data and predict crop yield failure probabilities. Second, we use permutation feature importances and sequential feature selection to investigate weather events and time periods identified by the models as most relevant for crop yield failure prediction. Finally, we explore the interactions learned by the models between these selected meteorological drivers, and compare the outcomes for several global crop models. Ultimately, our goal is to present a robust and highly interpretable machine learning method that can identify critical weather conditions from datasets with high temporal and spatial resolution, and is therefore able to identify drivers of crop failure using relatively few years of data.
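A minimal scikit-learn sketch of the permutation-feature-importance step on a random forest classifier; the feature matrix, the synthetic compound driver, and all settings are placeholders rather than the adapted models used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# X: weather-derived features per year/grid cell; y: 1 = crop yield failure.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = ((X[:, 3] > 1.0) & (X[:, 7] < -0.5)).astype(int)   # synthetic compound driver

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Permutation importance on held-out data highlights the meteorological
# features (and time windows) most relevant for failure prediction.
imp = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print(ranking[:5], imp.importances_mean[ranking[:5]])
```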

How to cite: Sweet, L. and Zscheischler, J.: Using interpretable machine learning to identify compound meteorological drivers of crop yield failure, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5464, https://doi.org/10.5194/egusphere-egu22-5464, 2022.

EGU22-5756 | Presentations | ITS2.7/AS5.2

The influence of meteorological parameters on wind speed extreme events:  A causal inference approach 

Katerina Hlavackova-Schindler (Schindlerova), Andreas Fuchs, Claudia Plant, Irene Schicker, and Rosmarie DeWit

Based on the ERA5 data of hourly meteorological parameters [1], we investigate the temporal effects of 12 meteorological parameters on the extreme values occurring in wind speed. We approach the problem using Granger causal inference, namely the heterogeneous graphical Granger model (HGGM) [2]. In contrast to the classical Granger model proposed for causal inference among Gaussian processes, the HGGM detects causal relations among time series with distributions from the exponential family, which includes a wider class of common distributions. In previous synthetic experiments, the HGGM combined with a genetic algorithm search based on the minimum message length principle has been shown to be superior in precision to baseline causal methods [2, 3]. We investigate various experimental settings of all 12 parameters with respect to the wind extremes in various time intervals. Moreover, we compare the influence of various data preprocessing methods and evaluate the interpretability of the discovered causal connections based on meteorological knowledge.
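For the regression-based baseline, a minimal sketch of a classical (linear, Gaussian) Granger causality test with statsmodels; the series are synthetic placeholders, and the HGGM itself is not implemented here.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Placeholder hourly series; in practice these would be the ERA5 wind speed
# (or an extreme-value indicator derived from it) and one meteorological driver.
rng = np.random.default_rng(0)
driver = rng.normal(size=2000)
wind = 0.5 * np.roll(driver, 3) + rng.normal(size=2000)   # driver leads wind by 3 h

# Column order: [effect, cause]; test lags 1..6 hours.
data = np.column_stack([wind, driver])
results = grangercausalitytests(data, maxlag=6, verbose=False)
for lag, res in results.items():
    fstat, pval = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {fstat:.2f}, p = {pval:.4f}")
```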

[1] https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-single-levels?tab=overview

[2] Behzadi, S, Hlaváčková-Schindler, K., Plant, C. (2019) Granger causality for heterogeneous processes, In: Pacific-Asia Conference on Knowledge Discovery and Data Mining. Springer, pp. 463-475.

[3] Hlaváčková-Schindler, K., Plant, C. (2020) Heterogeneous graphical Granger causality by minimum message length, Entropy, 22(1400). pp. 1-21 ISSN 1099-4300 MDPI (2020).

How to cite: Hlavackova-Schindler (Schindlerova), K., Fuchs, A., Plant, C., Schicker, I., and DeWit, R.: The influence of meteorological parameters on wind speed extreme events:  A causal inference approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5756, https://doi.org/10.5194/egusphere-egu22-5756, 2022.

EGU22-6093 | Presentations | ITS2.7/AS5.2

Machine learning to quantify cloud responses to aerosols from satellite data 

Jessenia Gonzalez, Odran Sourdeval, Gustau Camps-Valls, and Johannes Quaas

The Earth's radiation budget may be altered by changes in atmospheric composition or land use. This is called radiative forcing. Among the human-generated influences in radiative forcing, aerosol-cloud interactions are the least understood. A way to quantify a key uncertainty in this regard, the adjustment of cloud liquid water path (LWP), is by the ratio (sensitivity) of LWP to changes in cloud droplet number concentration (Nd). A key problem in quantifying this sensitivity from large-scale observations is that these two quantities are not retrieved by operational satellite products and are subject to large uncertainties. 

In this work, we use machine learning techniques to show that inferring LWP and Nd directly from satellite observation data may yield a better understanding of this relationship without using retrievals, which may lead to large and systematic uncertainties. In particular, we use supervised learning on the basis of available high-resolution ICON-LEM (ICOsahedral Non-hydrostatic Large Eddy Model) simulations from the HD(CP)² project (High Definition Clouds and Precipitation for advancing Climate Prediction) and forward-simulated radiances obtained from the radiative transfer modeling (RTTOV, Radiative Transfer for TOVS) which uses MODIS (Moderate Resolution Imaging Spectroradiometer) data as a reference. Usually, only two channels from the reflectance of MODIS can be used to estimate the LWP and Nd. However, having access to 36 bands allows us to exploit data and find other patterns to get these parameters directly from the observation space rather than from the retrievals. A machine learning model is used to create an emulator which approximates the Radiative Transfer Model, and another machine learning model to directly predict the sensitivity of LWP - Nd from the satellite observation data.

How to cite: Gonzalez, J., Sourdeval, O., Camps-Valls, G., and Quaas, J.: Machine learning to quantify cloud responses to aerosols from satellite data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6093, https://doi.org/10.5194/egusphere-egu22-6093, 2022.

Microclimate is a relatively recent concept in atmospheric sciences, which started drawing the attention of engineers and climatologists after the proliferation of open thermal (infrared, middle- and near-infrared) remote sensing instruments and high-resolution emissivity datasets. Although rarely mentioned in the context of reversing global climate change, efficient management of microclimates can nevertheless be considered a possible solution. Their function is bi-directional: on the one hand, they can act as 'buffers' by smoothing out the effects of the already altered global climate on people and ecosystems, while on the other they act as structural contributors to perturbations in the higher layers of the atmosphere.

In the most abstract terms, microclimates tend to manifest themselves via land surface temperature conditions, which in turn are highly sensitive to the underlying land cover and use decisions. Forests are considered as the most efficient terrestrial carbon sinks and climate regulators, and various forms, configurations and continuity of logging can substantially alter the patterns of local temperature fluxes, precipitation and ecosystems. In this study we propose a novel heteroskedastic machine learning method, which can attribute localised forest loss patches due to industrial mining activity and estimate the resulting change in dynamics of the surrounding microclimate(s). 

How to cite: Tkachenko, N. and Garcia Velez, L.: Global attribution of microclimate dynamics to industrial deforestation sites using thermal remote sensing and machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6466, https://doi.org/10.5194/egusphere-egu22-6466, 2022.

EGU22-6543 | Presentations | ITS2.7/AS5.2

High-resolution hybrid spatiotemporal modeling of daily relative humidity across Germany for epidemiological research: a Random Forest approach 

Nikolaos Nikolaou, Laurens Bouwer, Mahyar Valizadeh, Marco Dallavalle, Kathrin Wolf, Massimo Stafoggia, Annette Peters, and Alexandra Schneider

Introduction: Relative humidity (RH) is a meteorological variable of great importance, as it affects other climatic variables and plays a role in plant and animal life as well as in human comfort and well-being. However, the commonly used weather station observations are insufficient to represent the great spatiotemporal variability of RH, leading to exposure misclassification and difficulties in assessing local RH health effects. There is also a lack of high-resolution RH spatial datasets and no readily available methods for modeling humidity across space and time. To tackle these issues, we aimed to improve the spatiotemporal coverage of RH data in Germany using remote sensing and machine learning (ML) modeling.

Methods: In this study, we estimated German-wide daily mean RH at 1 km² resolution over the period 2000-2020. We used several predictors from multiple sources, including DWD RH observations, air temperature (Ta) predictions, and satellite-derived DEM, NDVI and true color band composition (bands 1, 4 and 3: red, green and blue). Our main predictor for estimating the daily mean RH was the daily mean Ta. We had already mapped daily mean Ta at 1 km² resolution across Germany through a regression-based hybrid approach of two linear mixed models using land surface temperature. An additional, very important predictor was the date, capturing the day-to-day variation of the relationship between response and explanatory variables. All these variables were included in a Random Forest (RF) model, applied for each year separately. We assessed the model's accuracy via 10-fold cross-validation (CV): first internally, using station observations that were not used for the model training, and then externally in the Augsburg metropolitan area using the REKLIM monitoring network over the period 2015-2019.
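A minimal scikit-learn sketch of the Random Forest regression with 10-fold CV; the predictor table is a random placeholder, and a plain shuffled split is used here rather than the station-wise internal/external validation described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_validate

# X: one row per station-day with columns such as
# [Ta_pred, elevation, ndvi, band_red, band_green, band_blue, day_of_year];
# y: observed daily mean RH (%). Random placeholders below.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 7))
y = 78 + 6 * X[:, 0] + rng.normal(scale=3, size=5000)

rf = RandomForestRegressor(n_estimators=500, n_jobs=-1, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(rf, X, y, cv=cv,
                        scoring=("r2", "neg_root_mean_squared_error"))

print("CV-R2  :", scores["test_r2"].mean())
print("CV-RMSE:", -scores["test_neg_root_mean_squared_error"].mean())
```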

Results: Regarding the internal validation, the 21-year overall mean CV-R2 was 0.76 and the CV-RMSE was 6.084%. For the model's external performance on the same day, we found CV-R2 = 0.75 and CV-RMSE = 7.051%, and for the 7-day average, CV-R2 = 0.81 and CV-RMSE = 5.420%. Germany is characterized by high relative humidity values, with a 20-year average RH of 78.4%. Even though the annual country-wide averages were quite stable, ranging from 81.2% for 2001 to 75.3% for 2020, the spatial variability exceeded 15% annually on average. Generally, winter was the most humid period, and December in particular was the most humid month. Extended urban cores (e.g., from Stuttgart to Frankfurt) and individual cities such as Munich were less humid than the surrounding rural areas. There are also specific spatial patterns in the RH distribution, associated with mountains, rivers and coastlines. For instance, the Alps and the North Sea coast are areas with elevated RH.

Conclusion: Our results indicate that the applied hybrid RF model is suitable for estimating nationwide RH at high spatiotemporal resolution, achieving a strong performance with low errors. Our method contributes to an improved spatial estimation of RH and the output product will help us understand better the spatiotemporal patterns of RH in Germany. We also plan to apply other ML techniques and compare the findings. Finally, our dataset will be used for epidemiological analyses, but could also be used for other research questions.

How to cite: Nikolaou, N., Bouwer, L., Valizadeh, M., Dallavalle, M., Wolf, K., Stafoggia, M., Peters, A., and Schneider, A.: High-resolution hybrid spatiotemporal modeling of daily relative humidity across Germany for epidemiological research: a Random Forest approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6543, https://doi.org/10.5194/egusphere-egu22-6543, 2022.

EGU22-6958 | Presentations | ITS2.7/AS5.2

Causal Discovery in Ensembles of Climate Time Series 

Andreas Gerhardus and Jakob Runge

Understanding the cause and effect relationships that govern natural phenomena is central to the scientific inquiry. While being the gold standard for inferring causal relationships, there are many scenarios in which controlled experiments are not possible. This is for example the case for most aspects of Earth's complex climate system. Causal relationships then have to be learned from statistical dependencies in observational data, a task that is commonly referred to as (observational) causal discovery.

When applied to time series data for learning causal relationships in dynamical systems, methods for causal discovery face additional statistical challenges. This is because, as licensed by an assumption of stationarity, samples are taken in a sliding window fashion and are hence autocorrelated rather than iid. Moreover, strong autocorrelations also often occlude other relevant causal links. The recent PCMCI algorithm (Runge et al., 2019) and its variants PCMCI+ (Runge, 2020) and LPCMCI (Gerhardus and Runge, 2020) address and to some extent alleviate these issues.

In this contribution we present the Ensemble-PCMCI method, an adaptation of PCMCI (and its variants PCMCI+ and LPCMCI) to cases in which the data comprise several time series, i.e., measurements of several instances of the same underlying dynamical system. Samples can then be taken across these different time series instead of in a sliding window fashion, thus avoiding the issue of autocorrelation and also allowing the stationarity assumption to be relaxed. In particular, this opens the possibility of analyzing temporal changes in the underlying causal mechanisms. A potential domain of application are ensemble forecasts.
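A minimal sketch of a basic PCMCI run with the tigramite package (the ensemble extension itself is not shown); import paths and argument names follow tigramite's documented interface but may vary between versions, and the data are placeholders.

```python
import numpy as np
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests import ParCorr  # import path may differ by version

# Placeholder data: 3 variables, 500 time steps. In the ensemble setting, samples
# would instead be pooled across ensemble members at each time step.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))
dataframe = pp.DataFrame(data, var_names=["X", "Y", "Z"])

pcmci = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())
results = pcmci.run_pcmci(tau_max=5, pc_alpha=0.05)

# p-values and test statistics for lagged links between all variable pairs.
print(results["p_matrix"].shape, results["val_matrix"].shape)
```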

Related references:
Jakob Runge et al. (2019). Detecting and quantifying causal associations in large nonlinear time series datasets. Science Advances 5 eaau4996.

Jakob Runge (2020). Discovering contemporaneous and lagged causal relations in autocorrelated nonlinear time series datasets. In Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI). Proceedings of Machine Learning Research 124 1388–1397. PMLR.

Andreas Gerhardus and Jakob Runge (2020). High-recall causal discovery for autocorrelated time series with latent confounders. In Advances in Neural Information Processing Systems 33 12615–12625. Curran Associates, Inc.

How to cite: Gerhardus, A. and Runge, J.: Causal Discovery in Ensembles of Climate Time Series, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6958, https://doi.org/10.5194/egusphere-egu22-6958, 2022.

EGU22-6998 | Presentations | ITS2.7/AS5.2

Inferring the Cloud Vertical Distribution from Geostationary Satellite Data 

Sarah Brüning, Holger Tost, and Stefan Niebler

Clouds and their radiative feedback mechanisms are of vital importance for the atmospheric cycle of the Earth regarding global weather today as well as climate changes in the future. Climate models and simulations are sensitive to the vertical distribution of clouds, emphasizing the need for broadly accessible fine resolution data. Although passive satellite sensors provide continuous cloud monitoring on a global scale, they miss the ability to infer physical properties below the cloud top. Active instruments like radar are particularly suitable for this task but lack an adequate spatio-temporal resolution. Here, recent advances in Deep-Learning models open up the possibility to transfer spatial information from a 2D towards a 3D perspective on a large-scale.

Using an example period in 2017, this study explores the feasibility and potential of neural networks for reconstructing the vertical distribution of volumetric radar data along a cloud's column. For this purpose, the network has been tested on the Full Disk domain of a geostationary satellite with high spatio-temporal resolution data. Using raw satellite channels, spectral indices, and topographic data, we infer the 3D radar reflectivity from these physical predictors. First results demonstrate the network's capability to reconstruct the cloud vertical distribution. Finally, the ultimate goal of interpolating the cloud column for the whole domain is supported by a considerably high accuracy in predicting the radar reflectivity. The resulting product can open up the opportunity to enhance climate models through an increased spatio-temporal resolution of 3D cloud structures.

How to cite: Brüning, S., Tost, H., and Niebler, S.: Inferring the Cloud Vertical Distribution from Geostationary Satellite Data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6998, https://doi.org/10.5194/egusphere-egu22-6998, 2022.

EGU22-7011 | Presentations | ITS2.7/AS5.2

Unlocking the potential of ML for Earth and Environment researchers 

Tobias Weigel, Frauke Albrecht, Caroline Arnold, Danu Caus, Harsh Grover, and Andrey Vlasenko

This presentation reports on support done under the aegis of Helmholtz AI for a wide range of machine learning based solutions for research questions related to Earth and Environmental sciences. We will give insight into typical problem statements from Earth observation and Earth system modeling that are good candidates for experimentation with ML methods and report on our accumulated experience tackling such challenges with individual support projects. We address these projects in an agile, iterative manner and during the definition phase, we direct special attention towards assembling practically meaningful demonstrators within a couple of months. A recent focus of our work lies on tackling software engineering concerns for building ML-ESM hybrids.

Our implementation workflow covers stages from data exploration to model tuning. A project may often start with evaluating available data and deciding on basic feasibility, apparent limitations such as biases or a lack of labels, and splitting into training and test data. Setting up a data processing workflow to subselect and compile training data is often the next step, followed by setting up a model architecture. We have had good experience with automatic tooling to tune hyperparameters and to test and optimize network architectures. In typical implementation projects, these stages may repeat many times to improve results and cover aspects such as errors due to confusing samples, incorporating domain model knowledge, testing alternative architectures and ML approaches, and dealing with memory limitations and performance optimization.

Over the past two years, we have supported Helmholtz-based researchers from many subdisciplines on making the best use of ML methods along with these steps. Example projects include wind speed regression on GNSS-R data, emulation of atmospheric chemistry modeling, Earth System model parameterizations with ML, marine litter detection, and rogue waves prediction. The poster presentation will highlight selected best practices across these projects. We are happy to share our experience as it may prove useful to applications in wider Earth System modeling. If you are interested in discussing your challenge with us, please feel free to chat with us.

How to cite: Weigel, T., Albrecht, F., Arnold, C., Caus, D., Grover, H., and Vlasenko, A.: Unlocking the potential of ML for Earth and Environment researchers, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7011, https://doi.org/10.5194/egusphere-egu22-7011, 2022.

EGU22-7034 | Presentations | ITS2.7/AS5.2

Developing a new emergent constraint through network analysis 

Lucile Ricard, Athanasios Nenes, Jakob Runge, and Fabrizio Falasca

Climate sensitivity expresses how average global temperature responds to an increase in greenhouse gas concentration. It is a key metric to assess climate change, and to formulate policy decisions, but its estimation from the Earth System Models (ESM) provides a wide range: between 2.5 and 4.0 K based on the sixth assessment report (AR6) of the Intergovernmental Panel on Climate Change (IPCC). To narrow down this spread, a number of observable metrics, called “emergent constraints” have been proposed, but often are based on relatively few parameters from a simulation – thought to express the “essence” of the climate simulation and its relationship with climate sensitivity. Many of the constraints to date however are model-dependent, therefore questionable in terms of their robustness.

We postulate that methods based on a "holistic" consideration of the simulations and observations may provide more robust constraints; we also focus on Sea Surface Temperature (SST) ensembles, as SST is a major driver of climate variability. To extract the essential patterns of SST variability, we use a knowledge discovery and network inference method, δ-Maps (Fountalis et al., 2016; Falasca et al., 2019), expanded to include a causal discovery algorithm (PCMCI) that relies on conditional independence testing, to capture the essential dynamics of the climate simulation on a functional graph and explore the true causal effects of the underlying dynamical system (Runge et al., 2019). The resulting networks are then quantitatively compared using network "metrics" that capture different aspects, including the regions of uniform behavior, how they alternate over time, and the strength of association. These metrics are then compared between simulations and observations and used as emergent constraints, an approach called Causal Model Evaluation (CME).

We apply δ-Maps and CME to CMIP6 model SST outputs and demonstrate how the networks and related metrics can be used to assess the historical performance of CMIP models, and climate sensitivity. We start by comparing the CMIP6 simulations against CMIP5 models, using the reanalysis dataset HadISST (Met Office Hadley Centre) as a proxy for observations. Each field is reduced to a network, which is then assessed for its similarity with the network derived from the reanalysis SST. The CMIP6 historical networks are then compared against CMIP6 projected networks, built from the Shared Socio-Economic Pathway ssp245 ("Middle of the road") scenario. Comparing past and future SST networks helps us evaluate the extent to which climate warming is reflected in changes of the underlying dynamical system captured by our networks. A large distance between the network built over the past period and the network built for a future scenario could be tightly related to a large temperature response to increasing greenhouse gas emissions, which is how we define climate sensitivity here. We finally give a new estimate of the climate sensitivity with a weighting scheme approach derived from a combination of these performance metrics.

How to cite: Ricard, L., Nenes, A., Runge, J., and Falasca, F.: Developing a new emergent constraint through network analysis, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7034, https://doi.org/10.5194/egusphere-egu22-7034, 2022.

EGU22-7355 | Presentations | ITS2.7/AS5.2

Combining cloud properties and synoptic observations to predict cloud base height using Machine Learning 

Julien Lenhardt, Johannes Quaas, and Dino Sejdinovic

Cloud base height (CBH) is an important geometric parameter of a cloud and shapes its radiative properties. The CBH is also further of practical interest in the aviation community regarding pilot visibility and aircraft icing hazards. While the cloud-top height has been successfully derived from passive imaging radiometers on satellites during recent years, the derivation of the CBH remains a more difficult challenge with these same retrievals.

In our study we combine surface observations and passive satellite remote-sensing retrievals to create a database of CBH labels and cloud properties, to ultimately train a machine learning model predicting CBH. The labels come from the global marine meteorological observations dataset (UK Met Office, 2006), which consists of near-global synoptic observations made at sea. This data set provides information about CBH, cloud type, cloud cover and other meteorological surface quantities, with CBH being the main interest here. The features on which the machine learning model is trained consist of different cloud-top and cloud optical properties (Level 2 products MOD06/MYD06 from the MODIS sensor) extracted on a 127 km × 127 km grid around the synoptic observation point. To study the large diversity of cloud scenes, an auto-encoder architecture is chosen. The regression task is then carried out in the latent space output by the encoder part of the model. To account for the spatial relationships in our input data, the model architecture is based on convolutional neural networks. We define a study domain in the Atlantic Ocean, around the equator. The combination of information from below and above the cloud could allow us to build a robust model to predict CBH and then extend predictions to regions where surface measurements are not available.
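A minimal PyTorch sketch of the overall idea: a convolutional encoder maps a multi-channel cloud-property patch to a latent vector, and a small regression head predicts CBH from that latent space; channel counts, patch size, and layer sizes are illustrative, and the decoder used for auto-encoder pre-training is omitted.

```python
import torch
import torch.nn as nn

class CBHNet(nn.Module):
    def __init__(self, in_channels=6, latent_dim=64):
        super().__init__()
        # Encoder part of the auto-encoder: cloud-property patch -> latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Regression head operating in the latent space -> cloud base height.
        self.head = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.head(self.encoder(x))

# Usage: x is a batch of cloud-property patches, y the synoptic CBH labels.
model = CBHNet()
x = torch.randn(8, 6, 128, 128)
cbh_pred = model(x)                                       # shape (8, 1)
loss = nn.functional.mse_loss(cbh_pred, torch.randn(8, 1))
```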

How to cite: Lenhardt, J., Quaas, J., and Sejdinovic, D.: Combining cloud properties and synoptic observations to predict cloud base height using Machine Learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7355, https://doi.org/10.5194/egusphere-egu22-7355, 2022.

EGU22-8068 | Presentations | ITS2.7/AS5.2

Generative Adversarial Modeling of Tropical Precipitation and the Intertropical Convergence Zone 

Cody Nash, Balasubramanya Nadiga, and Xiaoming Sun

In this study we evaluate the use of generative adversarial networks (GANs) to model satellite-based estimates of precipitation conditioned on reanalysis temperature, humidity, wind, and surface latent heat flux.  We are interested in the climatology of precipitation and modeling it in terms of atmospheric state variables, in contrast to a weather forecast or precipitation nowcast perspective.  We consider a hierarchy of models in terms of complexity, including simple baselines, generalized linear models, gradient boosted decision trees, pointwise GANs and deep convolutional GANs. To gain further insight into the models we apply methods for analyzing machine learning models, including model explainability, ablation studies, and a diverse set of metrics for pointwise and distributional differences, including information theory based metrics.  We find that generative models significantly outperform baseline models on metrics based on the distribution of predictions, particularly in capturing the extremes of the distributions.  Overall, a deep convolutional model achieves the highest accuracy.  We also find that the relative importance of atmospheric variables and of their interactions vary considerably among the different models considered. 

How to cite: Nash, C., Nadiga, B., and Sun, X.: Generative Adversarial Modeling of Tropical Precipitation and the Intertropical Convergence Zone, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8068, https://doi.org/10.5194/egusphere-egu22-8068, 2022.

EGU22-8130 | Presentations | ITS2.7/AS5.2

A comparison of explainable AI solutions to a climate change prediction task 

Philine Lou Bommer, Marlene Kretschmer, Dilyara Bareeva, Kadircan Aksoy, and Marina Höhne

In climate change research we are dealing with a chaotic system, usually leading to huge computational efforts in order to make faithful predictions. Deep neural networks (DNNs) offer promising new approaches due to their computational efficiency and universal solution properties. However, despite the increase in successful application cases with DNNs, the black-box nature of such purely data-driven approaches limits their trustworthiness and therefore the useability of deep learning in the context of climate science.

The field of explainable artificial intelligence (XAI) has been established to enable a deeper understanding of the complex, highly-nonlinear methods and their predictions. By shedding light onto the reasons behind the predictions made by DNNs, XAI methods can serve as a support for researchers to reveal the underlying physical mechanisms and properties inherent in the studied data. Some XAI methods have already been successfully applied to climate science, however, no detailed comparison of their performances is available. As the number of XAI methods on the one hand, and DNN applications on the other hand are growing, a comprehensive evaluation is necessary in order to understand the different XAI methods in the climate context.

In this work we provide an overview of the different available XAI methods and their potential applications in climate science. Based on a previously published climate change prediction task, we compare several explanation approaches, including model-aware (e.g. Saliency, IntGrad, LRP) and model-agnostic methods (e.g. SHAP). We analyse their ability to verify the physical soundness of the DNN predictions as well as their ability to uncover new insights into the underlying climate phenomena. Another important aspect we address in our work is the possibility of assessing the underlying uncertainties of DNN predictions using XAI methods. This is especially crucial in climate science applications, where uncertainty due to natural variability is usually large. To this end, we investigate the potential of two recently introduced XAI methods, UAI+ and NoiseGrad, which have been designed to include uncertainty information of the predictions in the explanations. We demonstrate that these XAI methods enable more stable explanations with respect to model noise and can further deal with uncertainties of network information. We argue that these methods are therefore particularly suitable for climate science applications.
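A minimal sketch of two of the model-aware attribution methods mentioned (Saliency and Integrated Gradients) using the captum library; the network below is a small placeholder standing in for the trained climate-prediction DNN.

```python
import torch
import torch.nn as nn
from captum.attr import Saliency, IntegratedGradients

# Placeholder network standing in for the trained climate-prediction DNN.
model = nn.Sequential(nn.Flatten(), nn.Linear(10 * 10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 1, 10, 10, requires_grad=True)   # one input "map"

# Gradient-based saliency: which input pixels most affect the predicted class?
saliency_map = Saliency(model).attribute(x, target=1)

# Integrated Gradients: attribution accumulated along a path from a zero baseline.
ig_map = IntegratedGradients(model).attribute(x, target=1, baselines=x * 0)

print(saliency_map.shape, ig_map.shape)
```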

How to cite: Bommer, P. L., Kretschmer, M., Bareeva, D., Aksoy, K., and Höhne, M.: A comparison of explainable AI solutions to a climate change prediction task, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8130, https://doi.org/10.5194/egusphere-egu22-8130, 2022.

Despite the importance of the Atlantic Meridional Overturning Circulation (AMOC) to the climate on decadal and multidecadal timescales, Earth System Models (ESM) exhibit large differences in their estimation of the amplitude and spectrum of its variability. In addition, observational data is sparse and before the onset of the current century, many reconstructions of the AMOC rely on linear relationships to the more readily observed surface properties of the Atlantic rather than the less explored deeper ocean. Yet, it is conceptually well established that the density distribution is dynamically closely related to the AMOC, and in this contribution, we investigate this connection in model simulations to identify which density information is necessary to reconstruct the AMOC. We chose to establish these links in a data-driven approach. 

We use simulations from a historically forced large ensemble as well as abruptly forced long term simulations with varying strength of forcing and therefore comprising vastly different states of the AMOC. In a first step, we train uncertainty-aware neural networks to infer the state of the AMOC from the density information at different layers in the North Atlantic. In a second step, we compare the performance of the trained neural networks across depth and with their linear counterparts in simulations that were not part of the training process. Finally, we investigate how the networks arrived at their specific prediction using Layer-Wise-Relevance Propagation (LRP), a recently developed technique that propagates relevance backwards through the network to the input density field, effectively filtering out important from unimportant information and identifying regions of high relevance for the reconstruction of the AMOC.

Our preliminary results show that in general, the information provided by only one density layer between the surface and 1100 m is sufficient to reconstruct the AMOC with high precision, and neural networks are capable of generalizing to unseen simulations. From the set of these neural networks trained on different layers, we choose the surface layer as well as one subsurface layer close to 1000 m for further investigation of their decision-making process using LRP. Our preliminary investigation reveals that the LRP in the subsurface layer identifies regions of potentially high physical relevance for the AMOC. By contrast, the regions identified in the surface layer show little physical relevance for the AMOC.

How to cite: Mayer, B., Barnes, E., Marotzke, J., and Baehr, J.: Reconstructing the Atlantic Meridional Overturning Circulation in Earth System Model simulations from density information using explainable machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8411, https://doi.org/10.5194/egusphere-egu22-8411, 2022.

EGU22-8454 | Presentations | ITS2.7/AS5.2

Using Generative Adversarial Networks (GANs) to downscale tropical cyclone precipitation. 

Emily Vosper, Dann Mitchell, Peter Watson, Laurence Aitchison, and Raul Santos-Rodriguez

Fluvial flood hazards from tropical cyclones (TCs) are frequently the leading cause of mortality and damages (Rezapour and Baldock, 2014). Accurately modeling TC precipitation is vital for studying the current and future impacts of TCs. However, general circulation models at typical resolution struggle to accurately reproduce TC rainfall, especially for the most extreme storms (Murakami et al., 2015). Increasing horizontal resolution can improve precipitation estimates (Roberts et al., 2020; Zhang et al., 2021), but as these methods are computationally expensive there is a trade-off between accuracy and generating enough ensemble members to generate sufficient high impact, low probability events. Often, downscaling models are used as a computationally cheaper alternative. 

Here, we downscale TC precipitation data from 100 km to 10 km resolution using a generative adversarial network (GAN). Generative approaches have the potential to well reproduce the fine spatial detail and stochastic nature of precipitation (Ravuri et al., 2021). Using observational products for tracking (IBTrACS) and rainfall (MSWEP), we train our GAN over the historical period 1979 - 2020. We are interested in how well our model reproduces precipitation intensity and structure with a focus on the most extreme events, where models have traditionally struggled. 

Bibliography 

Murakami, H., et al., 2015. Simulation and Prediction of Category 4 and 5 Hurricanes in the High-Resolution GFDL HiFLOR Coupled Climate Model*. Journal of Climate, 28(23), pp.9058-9079. 

Ravuri, S., et al., 2021. Skilful precipitation nowcasting using deep generative models of radar. Nature, 597(7878), pp.672-677. 

Rezapour, M. and Baldock, T., 2014. Classification of Hurricane Hazards: The Importance of Rainfall. Weather and Forecasting, 29(6), pp.1319-1331. 

Roberts, M., et al., 2020. Impact of Model Resolution on Tropical Cyclone Simulation Using the HighResMIP–PRIMAVERA Multimodel Ensemble. Journal of Climate, 33(7), pp.2557-2583. 

Zhang, W., et al., 2021. Tropical cyclone precipitation in the HighResMIP atmosphere-only experiments of the PRIMAVERA Project. Climate Dynamics, 57(1-2), pp.253-273. 

How to cite: Vosper, E., Mitchell, D., Watson, P., Aitchison, L., and Santos-Rodriguez, R.: Using Generative Adversarial Networks (GANs) to downscale tropical cyclone precipitation., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8454, https://doi.org/10.5194/egusphere-egu22-8454, 2022.

EGU22-8499 | Presentations | ITS2.7/AS5.2 | Highlight

Matryoshka Neural Operators: Learning Fast PDE Solvers for Multiscale Physics 

Björn Lütjens, Catherine H. Crawford, Campbell Watson, Chris Hill, and Dava Newman

Running a high-resolution global climate model can take multiple days on the world's largest supercomputers. Due to the long runtimes that are caused by solving the underlying partial differential equations (PDEs), climate researchers struggle to generate ensemble runs that are necessary for uncertainty quantification or exploring climate policy decisions.

 

Physics-informed neural networks (PINNs) promise a solution: they can solve single instances of PDEs up to three orders of magnitude faster than traditional finite difference numerical solvers. However, most approaches in physics-informed machine learning learn the solution of PDEs over the full spatio-temporal domain, which requires infeasible amounts of training data, does not exploit knowledge of the underlying large-scale physics, and reduces model trust. Our philosophy is to limit learning to the hard-to-model parts. Hence, we are proposing a novel method called matryoshka neural operator that leverages an old scheme called super-parametrizations developed in geophysical fluid dynamics. Using this scheme our proposed physics-informed architecture exploits knowledge of approximate large-scale dynamics and only learns the influence of small-scale dynamics onto large-scale dynamics, also called subgrid parametrizations.

 

Some work in geophysical fluid dynamics is conceptually similar, but fully relies on neural networks which can only operate on fixed grids (Gentine et al., 2018). We are the first to learn grid-independent subgrid parametrizations by leveraging neural operators that learn the dynamics in a grid-independent latent space. Neural operators can be seen as an extension of neural networks to infinite dimensions: they encode infinite-dimensional inputs into finite-dimensional representations, such as Eigen or Fourier modes, and learn the nonlinear temporal dynamics in the encoded state.

 

We demonstrate the neural operators for learning non-local subgrid parametrizations over the full large-scale domain of the two-scale Lorenz96 equation. We show that the proposed learning-based PDE solver is grid-independent, has quasilinear instead of quadratic complexity in comparison to a fully-resolving numerical solver, is more accurate than current neural network or polynomial-based parametrizations, and offers interpretability through Fourier modes.

 

Gentine, P., Pritchard, M., Rasp, S., Reinaudi, G., and Yacalis, G. (2018). Could machine learning break the convection parameterization deadlock? Geophysical Research Letters, 45, 5742– 5751. https://doi.org/10.1029/2018GL078202

How to cite: Lütjens, B., Crawford, C. H., Watson, C., Hill, C., and Newman, D.: Matryoshka Neural Operators: Learning Fast PDE Solvers for Multiscale Physics, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8499, https://doi.org/10.5194/egusphere-egu22-8499, 2022.

EGU22-8649 | Presentations | ITS2.7/AS5.2

Physically Based Deep Learning Framework to Model Intense Precipitation Events at Engineering Scales 

Bernardo Teufel, Fernanda Carmo, Laxmi Sushama, Lijun Sun, Naveed Khaliq, Stephane Belair, Asaad Yahia Shamseldin, Dasika Nagesh Kumar, and Jai Vaze

The high computational cost of super-resolution (< 250 m) climate simulations is a major barrier for generating climate change information at such high spatial and temporal resolutions required by many sectors for planning local and asset-specific climate change adaptation strategies. This study couples machine learning and physical modelling paradigms to develop a computationally efficient simulator-emulator framework for generating super-resolution climate information. To this end, a regional climate model (RCM) is applied over the city of Montreal, for the summers of 2015 to 2020, at 2.5 km (i.e., low resolution – LR) and 250 m (i.e., high resolution – HR), which is used to train and validate the proposed super-resolution deep learning (DL) model. In the field of video super-resolution, convolutional neural networks combined with motion compensation have been used to merge information from multiple LR frames to generate high-quality HR images. In this study, a recurrent DL approach based on passing the generated HR estimate through time helps the DL model to recreate fine details and produce temporally consistent fields, resembling the data assimilation process commonly used in numerical weather prediction. Time-invariant HR surface fields and storm motion (approximated by RCM-simulated wind) are also considered in the DL model, which helps further improve output realism. Results suggest that the DL model is able to generate HR precipitation estimates with significantly lower errors than other methods used, especially for intense short-duration precipitation events, which often occur during the warm season and are required to evaluate climate resiliency of urban storm drainage systems. The generic and flexible nature of the developed framework makes it even more promising as it can be applied to other climate variables, periods and regions.

How to cite: Teufel, B., Carmo, F., Sushama, L., Sun, L., Khaliq, N., Belair, S., Shamseldin, A. Y., Nagesh Kumar, D., and Vaze, J.: Physically Based Deep Learning Framework to Model Intense Precipitation Events at Engineering Scales, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8649, https://doi.org/10.5194/egusphere-egu22-8649, 2022.

EGU22-8656 | Presentations | ITS2.7/AS5.2 | Highlight

Conditional normalizing flow for predicting the occurrence of rare extreme events on long time scales 

Jakob Kruse, Beatrice Ellerhoff, Ullrich Köthe, and Kira Rehfeld

The socio-economic impacts of rare extreme events, such as droughts, are one of the main ways in which climate affects humanity. A key challenge is to quantify the changing risk of once-in-a-decade or even once-in-a-century events under global warming, while leaning heavily on comparatively short observation spans. The predictive power of classical statistical methods from extreme value theory (EVT) often remains limited to uncorrelated events with short return periods. This is mainly due to their strong assumption of an underlying exponential family distribution of the variable in question. Standard EVT is therefore at odds with the rich and large-scale correlations found in various surface climate parameters such as local temperatures, as well as the more complex shape of empirical distributions. Here, we turn to recent developments in machine learning, namely to conditional normalizing flows, which are flexible neural networks for modeling highly-correlated unknown distributions. Given a short time series, we show how such networks can model the posterior probability of events whose return periods are much longer than the observation span. The necessary correlations and patterns can be extracted from a paired set of inputs, i.e. time series, and outputs, i.e. return periods. To evaluate this approach in a controlled setting, we generate synthetic training data by sampling temporally autoregressive processes with a non-trivial covariance structure. We compare the results to a baseline analysis using EVT. In this work, we focus on the prediction of return periods of rare statistical events. However, we expect the same potential for a wide range of statistical measures, such as the power spectrum and rate functions. Future work should also investigate its applicability to compound and spatially extended events, as well as changing conditions under warming scenarios.

How to cite: Kruse, J., Ellerhoff, B., Köthe, U., and Rehfeld, K.: Conditional normalizing flow for predicting the occurrence of rare extreme events on long time scales, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8656, https://doi.org/10.5194/egusphere-egu22-8656, 2022.

EGU22-8848 | Presentations | ITS2.7/AS5.2 | Highlight

Defining regime specific cloud sensitivities using the learnings from machine learning 

Alyson Douglas and Philip Stier

Clouds remain a core uncertainty in quantifying Earth’s climate sensitivity due to their complex dynamical and microphysical interactions with multiple components of the Earth system. Therefore, it is pivotal to observationally constrain possible cloud changes in a changing climate in order to evaluate our current generation of Earth system models against a set of physically realistic sensitivities. We developed a novel observational regime framework from over 15 years of MODIS satellite observations, from which we have derived a set of regimes of cloud controlling factors. These regimes were established using the relationship strength, as measured by the weights of a trained, simple machine learning model. We apply these as observational constraints on the r1i1p1f1 and r1i1p1f3 historical runs from various CMIP6 models to test if CMIP6 climate models can accurately represent key cloud controlling factors. Within our regime framework, we can compare the observed environmental drivers and sensitivities of each regime against the parameterization-driven, modeled outcomes. We find that, for almost every regime, CMIP6 models do not properly represent the global distribution of occurrence, calling into question how much we can trust our range of climate sensitivities when specific cloud controlling factors are so badly represented by these models. This is especially pertinent in Southern Ocean and marine stratocumulus regimes, as the changes in these clouds’ optical depths and cloud amount have increased the ECS from CMIP5 to CMIP6. Our results suggest that these uncertainties in CMIP6 cloud parameterizations propagate into derived cloud feedbacks and ultimately climate sensitivity, which is evident from a regime-based analysis of cloud controlling factors.

How to cite: Douglas, A. and Stier, P.: Defining regime specific cloud sensitivities using the learnings from machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8848, https://doi.org/10.5194/egusphere-egu22-8848, 2022.

EGU22-9112 | Presentations | ITS2.7/AS5.2

Causal Orthogonal Functions: A Causal Inference approach to temporal feature extraction 

Nicolas-Domenic Reiter, Jakob Runge, and Andreas Gerhardus

Understanding complex dynamical systems is a major challenge in many scientific disciplines. There are two aspects which are of particular interest when analyzing complex dynamical systems: 1) the temporal patterns along which they evolve and 2) the governing causal mechanisms.

Temporal patterns in a time-series can be extracted and analyzed through a variety of time-series representations, that is a collection of filters. Discrete Wavelet and Fourier Transforms are prominent examples and have been widely applied to investigate the temporal structure of dynamical systems.

Causal Inference is a framework formalizing questions of cause and effect. In this work we propose an elementary and systematic approach to combine time-series representations with Causal Inference. In doing so, we introduce a notion of cause and effect with respect to a pair of arbitrary time-series filters. Using a Singular Value Decomposition we derive an alternative representation of how one process drives another over a specified time-period. We call the building blocks of this representation Causal Orthogonal Functions. Combining the notion of Causal Orthogonal Functions with a Wavelet or Fourier decomposition of a time-series yields time-scale specific Causal Orthogonal Functions. As a result we obtain a time-scale specific representation of the causal influence one process has on another over some fixed time-period. This allows us to conduct causal effect analysis in discrete-time stochastic dynamical systems at multiple time-scales. We illustrate our approach by examining linear VAR processes.
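
One way to picture the construction, under the simplifying assumption of a linear bivariate VAR, is sketched below in numpy: estimate a linear map from a window of the driver's past to a window of the target's future and take its SVD, so that the singular vectors act as orthogonal driving/response patterns over the chosen period. This is only an illustration of the idea, not the authors' exact derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1) in which X drives Y.
T = 5000
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.7 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

# Stack a past window of X (length p) against a future window of Y (length q).
p, q = 10, 10
rows = range(p, T - q)
X_past = np.array([x[t - p:t][::-1] for t in rows])      # (n, p)
Y_future = np.array([y[t:t + q] for t in rows])          # (n, q)

# Least-squares estimate of the linear map B: Y_future ~ X_past @ B
B, *_ = np.linalg.lstsq(X_past, Y_future, rcond=None)    # (p, q)

# SVD of the map: right/left singular vectors are orthogonal "input" patterns
# in X's past and "response" patterns in Y's future, ordered by singular value.
U, s, Vt = np.linalg.svd(B)
print("leading singular values:", np.round(s[:3], 3))
print("leading driving pattern (X past):", np.round(U[:, 0], 2))
print("leading response pattern (Y future):", np.round(Vt[0], 2))
```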

How to cite: Reiter, N.-D., Runge, J., and Gerhardus, A.: Causal Orthogonal Functions: A Causal Inference approach to temporal feature extraction, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9112, https://doi.org/10.5194/egusphere-egu22-9112, 2022.

EGU22-9250 | Presentations | ITS2.7/AS5.2

Prominent discords in climate data through matrix profile techniques: detecting emerging long term pattern changes and anomalous events 

H. El Khansa, C. Gervet, and A. Brouillet

Outlier detection generally aims at identifying extreme events and insightful changes in climate behavior. One important type of outlier is the pattern outlier, also called a discord, where the detected outlier pattern covers a time interval instead of a single point in the time series. Machine learning contributes many algorithms and methods in this field, especially unsupervised algorithms for different types of time series data. In a first submitted paper, we investigated discord detection applied to climate-related impact observations. We introduced the notion of prominent discords, a contextual concept that derives a set of insightful discords by identifying dependencies among variable-length discords, which are ordered based on the number of discords they subsume.

Following this study, here we propose a ranking function based on the length of the first subsumed discord and the total length of the prominent discord, and make use of the powerful matrix profile technique. Preliminary results show that our approach, applied to monthly runoff time series between 1902 and 2005 over West Africa, detects both the emergence of a long-term change together with the associated former climate regime, and the driest regional decade (1982-1992) of the 20th century (i.e. a climate extreme event). In order to demonstrate the genericity and multiple insights gained by our method, we go further by evaluating the approach on other impact (e.g. crop data, fires, water storage) and climate (precipitation and temperature) observations, to provide similar results on different variables, extract relationships among them and identify what constitutes a prominent discord in such cases. A further step will consist in evaluating our methodology on climate and impact historical simulations, to determine if prominent discords highlighted in observations can be captured in climate and impact models.
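
For readers unfamiliar with the matrix profile, a minimal discord-detection sketch using the open-source stumpy package on a synthetic monthly series is shown below; the prominent-discord ranking described above is not part of this snippet, and all data are invented.

```python
import numpy as np
import stumpy

rng = np.random.default_rng(1)

# Synthetic monthly runoff-like series (104 years) with an anomalous decade.
n_months = 104 * 12
t = np.arange(n_months)
runoff = 10 + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, n_months)
runoff[960:1092] *= 0.4          # a dry decade, inserted as a known anomaly

# Matrix profile with a 12-month subsequence length.
m = 12
mp = stumpy.stump(runoff, m)     # columns: profile value, index, left, right
profile = mp[:, 0].astype(float)

# The discord is the subsequence farthest from its nearest neighbour.
discord_start = int(np.argmax(profile))
print("discord starts at month index", discord_start)
```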

How to cite: El Khansa, H., Gervet, C., and Brouillet, A.: Prominent discords in climate data through matrix profile techniques: detecting emerging long term pattern changes and anomalous events , EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9250, https://doi.org/10.5194/egusphere-egu22-9250, 2022.

EGU22-9281 | Presentations | ITS2.7/AS5.2

Machine learning-based identification and classification of ocean eddies 

Eike Bolmer, Adili Abulaitijiang, Jürgen Kusche, Luciana Fenoglio-Marc, Sophie Stolzenberger, and Ribana Roscher

The automatic detection and tracking of mesoscale ocean eddies, the ‘weather of the ocean’, is a well-known task in oceanography. These eddies have horizontal scales from 10 km up to 100 km and above. They transport water masses, heat, nutrients, and carbon and have been identified as hot spots of biological activity. Monitoring eddies is therefore of interest to marine biologists and fisheries, among others.
Recent advances in satellite-based observation for oceanography such as sea surface height (SSH) and sea surface temperature (SST) result in a large supply of different data products in which eddies are visible. In radar altimetry, observations are acquired with repeat cycles between 10 and 35 days and cross-track spacing of a few tens to a few hundreds of kilometres. Ocean eddies are therefore clearly visible but typically covered by only one ground track. In addition, due to their motion, eddies are difficult to reconstruct, which makes creating detailed maps of the ocean with a high temporal resolution a challenge. In general, they are considered a perturbation, and their influence on altimetry data is difficult to determine, which is especially limiting for the determination of an accurate time-averaged dynamic topography of the ocean.
Due to their dynamic spatio-temporal behavior, the identification and tracking of eddies are challenging. A number of methods have been developed to identify and track eddies in gridded maps of sea surface height derived from multi-mission data sets. However, these procedures have shortcomings since the gridding process removes information that is valuable in achieving more accurate results.
Therefore, in the project EDDY carried out at the University of Bonn we intend to use ground track data from satellite altimetry and - as a long-term goal - additional remote sensing data such as SST, optical imagery, as well as statistical information from model outputs. The combination of the data will serve as a basis for a multi-modal deep learning algorithm. In detail, we will utilize transformers, a deep neural network architecture, that originates from the field of Natural Language Processing (NLP) and became popular in recent years in the field of computer vision. This method shows promising results in terms of understanding temporal and spatial information, which is essential in detecting and tracking highly dynamic eddies.
In this presentation, we introduce the deep neural network used in the EDDY project and show results based on gridded data sets for the Gulf Stream area for 2017, as well as first results of single-track eddy identification in the region.

How to cite: Bolmer, E., Abulaitijiang, A., Kusche, J., Fenoglio-Marc, L., Stolzenberger, S., and Roscher, R.: Machine learning-based identification and classification of ocean eddies, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9281, https://doi.org/10.5194/egusphere-egu22-9281, 2022.

EGU22-9461 | Presentations | ITS2.7/AS5.2

Data Driven Approaches for Climate Predictability 

Balasubramanya Nadiga

Reduced-order dynamical models play a central role in developing our understanding of predictability of climate. In this context, the Linear Inverse Modeling (LIM) approach (closely related to Dynamic Mode Decomposition, DMD), by helping capture a few essential interactions between dynamical components of the full system, has proven valuable in giving insights into the dynamical behavior of the full system. While nonlinear extensions of the LIM approach have been attempted, none have gained widespread acceptance. We demonstrate that Reservoir Computing (RC), a form of machine learning suited to learning in the context of chaotic dynamics by exploiting the phenomenon of generalized synchronization, provides an alternative nonlinear approach that comprehensively outperforms the LIM approach. Additionally, the potential of the RC approach to capture the structure of the climatological attractor and to continue the evolution of the system on the attractor in a realistic fashion long after the ensemble average has stopped tracking the reference trajectory is highlighted. Finally, other dynamical-systems-based methods and probabilistic deep learning methods are considered and a broader perspective on the use of data-driven methods in understanding climate predictability is offered.
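
An illustrative echo state network (one common form of Reservoir Computing) in numpy is sketched below: a fixed random reservoir driven by a scalar series, a ridge-regression readout, and an autonomous continuation that feeds predictions back as inputs. Reservoir size, spectral radius and the toy driver are placeholder choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy driver: a chaotic logistic-map series standing in for a climate index.
T = 3000
u = np.empty(T); u[0] = 0.4
for t in range(1, T):
    u[t] = 3.9 * u[t - 1] * (1.0 - u[t - 1])

# Random reservoir, rescaled to a spectral radius below 1 (echo state property).
N = 300
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=N)

# Drive the reservoir and collect its states.
states = np.zeros((T, N))
for t in range(1, T):
    states[t] = np.tanh(W @ states[t - 1] + W_in * u[t - 1])

# Ridge-regression readout trained to predict the next value of the series.
X, y = states[200:-1], u[201:]          # discard an initial transient
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

# Autonomous continuation: feed the prediction back as the next input.
state, value, preds = states[-1], u[-1], []
for _ in range(50):
    state = np.tanh(W @ state + W_in * value)
    value = state @ W_out
    preds.append(value)
print(np.round(preds[:5], 3))
```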

How to cite: Nadiga, B.: Data Driven Approaches for Climate Predictability, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9461, https://doi.org/10.5194/egusphere-egu22-9461, 2022.

EGU22-9877 | Presentations | ITS2.7/AS5.2

A Conditional Generative Adversarial Network for Rainfall Downscaling 

Marcello Iotti, Paolo Davini, Jost von Hardenberg, and Giuseppe Zappa

Predicting extreme precipitation events is one of the main challenges of climate science in this decade. Despite continuously increasing computing availability, the spatial resolution of Global Climate Models (GCMs) is still too coarse to correctly represent and predict small-scale phenomena such as convection, so that precipitation prediction remains imprecise. Indeed, precipitation shows variability on both spatial and temporal scales (much) smaller than the current state-of-the-art GCM resolution. Therefore, downscaling techniques play a crucial role, both for the understanding of the phenomenon itself and for applications such as hydrologic studies, risk prediction and emergency management. Seen in the context of image processing, a downscaling procedure has many similarities with super-resolution tasks, i.e. the improvement of the resolution of an image. This task has benefited from the application of Machine Learning techniques, and in particular from the introduction of Convolutional Neural Networks (CNNs).

In our work we exploit a conditional Generative Adversarial Network (cGAN) to train a generator model to perform precipitation downscaling. This generator, a deep CNN, takes as input the precipitation field at the scale resolved by GCMs, adds random noise, and outputs a possible realization of the precipitation field at higher resolution, preserving its statistical properties with respect to the coarse-scale field. The GAN is being trained and tested in a “perfect model” setup, in which we try to reproduce the ERA5 precipitation field starting from an upscaled version of it.

Compared to other downscaling techniques, our model has the advantage of being computationally inexpensive at run time, since the computational load is mostly concentrated in the training phase. We are examining the Greater Alpine Region, over which numerical model performance is limited by the complex orography. Nevertheless, the approach, being independent of physical, statistical and empirical assumptions, can be easily extended to different domains.
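
A schematic of the adversarial setup in PyTorch follows: a generator mapping a coarse precipitation field plus noise to a finer field, and a discriminator conditioned on the same coarse field, with one update step for each. The x4 factor, layer sizes and loss details are illustrative assumptions rather than the configuration used in this work.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, scale=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="nearest"),
            nn.Conv2d(2, hidden, 3, padding=1), nn.ReLU(),      # precip + noise
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1), nn.Softplus(),  # non-negative rain
        )

    def forward(self, coarse, noise):
        return self.net(torch.cat([coarse, noise], dim=1))

class Discriminator(nn.Module):
    def __init__(self, scale=4, hidden=32):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="nearest")
        self.net = nn.Sequential(
            nn.Conv2d(2, hidden, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(hidden, hidden, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(hidden, 1),
        )

    def forward(self, fine, coarse):
        return self.net(torch.cat([fine, self.up(coarse)], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

coarse = torch.rand(8, 1, 16, 16)     # upscaled ("perfect model") precipitation
fine_true = torch.rand(8, 1, 64, 64)  # target ERA5-like field
noise = torch.randn(8, 1, 16, 16)

# One discriminator step and one generator step.
fake = G(coarse, noise)
loss_d = bce(D(fine_true, coarse), torch.ones(8, 1)) + \
         bce(D(fake.detach(), coarse), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

loss_g = bce(D(fake, coarse), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```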

How to cite: Iotti, M., Davini, P., von Hardenberg, J., and Zappa, G.: A Conditional Generative Adversarial Network for Rainfall Downscaling, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9877, https://doi.org/10.5194/egusphere-egu22-9877, 2022.

EGU22-10120 | Presentations | ITS2.7/AS5.2

A Convolutional Neural Network approach for downscaling climate model data in Trentino-South Tyrol (Eastern Italian Alps) 

Alice Crespi, Daniel Frisinghelli, Tatiana Klisho, Marcello Petitta, Alexander Jacob, and Massimiliano Pittore

Statistical downscaling is a very popular technique to increase the spatial resolution of existing global and regional climate model simulations and to provide reliable climate data at local scale. The availability of tailored information is particularly crucial for conducting local climate assessments, climate change studies and for running impact models, especially in complex terrain. A crucial requirement is the ability to reliably downscale the mean, variability and extremes of climate data, while preserving their spatial and temporal correlations.

Several machine learning-based approaches have been proposed so far to perform such a task by extracting non-linear relationships between local-scale variables and large-scale atmospheric predictors, and they can outperform more traditional statistical methods. In recent years, deep learning has gained particular interest in geoscientific studies and climate science as a promising tool to improve climate downscaling thanks to its greater ability to extract high-level features from large datasets using complex hierarchical architectures. However, the proper network architecture is highly dependent on the target variable, temporal and spatial resolution, as well as the application purpose and target domain.

This contribution presents a Deep Convolutional Encoder-Decoder Network (DCEDN) architecture which was implemented and evaluated for the first time over Trentino-South Tyrol in the Eastern Italian Alps to derive 1-km climate fields of daily temperature and precipitation from ERA-5 reanalysis. We will show that in-depth optimization of hyper-parameters, loss function choice and sensitivity analyses are essential preliminary steps to derive an effective architecture and enhance the interpretability of results and of their variability. The validation of downscaled fields of both temperature and precipitation confirmed the improved representation of local features for both mean and extreme values, even though lower performances were obtained for precipitation in reproducing small-scale spatial features. In all cases, DCEDN was found to outperform classical schemes based on linear regression and the bias adjustment procedures used as benchmarks. We will discuss in detail the advantages and recommendations for the integration of DCEDN as an efficient post-processing block in climate data simulations supporting local-scale studies. The model constraints in feature extraction, especially for precipitation, over the limited extent of the study domain will also be explained along with potential future developments of such type of networks for improved climate science applications.
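
A compact convolutional encoder-decoder of the general kind described is sketched below in PyTorch, mapping a coarse reanalysis-like field to a finer grid; channel counts, depth and the upsampling factor are placeholders, not the tuned DCEDN architecture.

```python
import torch
import torch.nn as nn

class DownscalingEncoderDecoder(nn.Module):
    """Encoder compresses the coarse field; decoder expands it to a finer grid."""
    def __init__(self, in_ch=1, hidden=32, up_steps=3):   # 2^3 = x8 refinement
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        decoder = []
        for _ in range(up_steps):
            decoder += [nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1),
                        nn.ReLU()]
        decoder += [nn.Conv2d(hidden, 1, 3, padding=1)]
        self.decoder = nn.Sequential(*decoder)

    def forward(self, coarse):
        return self.decoder(self.encoder(coarse))

model = DownscalingEncoderDecoder()
coarse_temp = torch.randn(4, 1, 32, 32)          # e.g. a coarse ERA5-like tile
fine_temp = model(coarse_temp)                   # -> (4, 1, 256, 256)
loss = nn.functional.l1_loss(fine_temp, torch.randn(4, 1, 256, 256))
loss.backward()
```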

How to cite: Crespi, A., Frisinghelli, D., Klisho, T., Petitta, M., Jacob, A., and Pittore, M.: A Convolutional Neural Network approach for downscaling climate model data in Trentino-South Tyrol (Eastern Italian Alps), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10120, https://doi.org/10.5194/egusphere-egu22-10120, 2022.

EGU22-10773 | Presentations | ITS2.7/AS5.2 | Highlight

Choose your own weather adventure: deep weather generation for “what-if” climate scenarios 

Campbell Watson, Jorge Guevara, Daniela Szwarcman, Dario Oliveira, Leonardo Tizzei, Maria Garcia, Priscilla Avegliano, and Bianca Zadrozny

Climate change is making extreme weather more extreme. Given the inherent uncertainty of long-term climate projections, there is growing need for rapid, plausible “what-if” climate scenarios to help users understand climate exposure and examine resilience and mitigation strategies. Since the 1980s, such “what-if” scenarios have been created using stochastic weather generators. However, it is very challenging for traditional weather generation algorithms to create realistic extreme climate scenarios because the weather data being modeled is highly imbalanced, contains spatiotemporal dependencies and has extreme weather events exacerbated by a changing climate.

There are few works comparing and evaluating stochastic multisite (i.e., gridded) weather generators, and no existing work that compares promising deep learning approaches for weather generation with classical stochastic weather generators. We will present the culmination of a multi-year effort to perform a systematic evaluation of stochastic weather generators and deep generative models for multisite precipitation synthesis. Among other things, we show that variational auto-encoders (VAE) offer an encouraging pathway for efficient and controllable climate scenario synthesis – especially for extreme events. Our proposed VAE schema selects events with different characteristics in the normalized latent space (from rare to common) and generates high-quality scenarios using the trained decoder. Improvements are provided via latent space clustering and bringing histogram-awareness to the VAE loss.

This research will serve as a guide for improving the design of deep learning architectures and algorithms for application in Earth science, including feature representation and uncertainty quantification of Earth system data and the characterization of so-called “grey swan” events.
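
A sketch of the sampling idea: once such a VAE is trained, scenarios of different rarity can be drawn by choosing latent vectors closer to or farther from the centre of the normalized latent space and passing them through the decoder. The decoder below is an untrained stand-in, and the rarity heuristic, field size and names are assumptions for illustration only.

```python
import torch
import torch.nn as nn

latent_dim, grid = 16, 32

# Stand-in for the decoder of a trained VAE: latent vector -> precipitation field.
decoder = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, grid * grid), nn.Softplus(),   # non-negative precipitation
    nn.Unflatten(1, (grid, grid)),
)

def sample_scenarios(n, rarity=1.0):
    """rarity ~ 1: typical events; rarity >> 1: latent vectors far from the
    origin of the standard-normal prior, i.e. rarer synthetic scenarios."""
    z = torch.randn(n, latent_dim)
    z = rarity * z / z.norm(dim=1, keepdim=True) * latent_dim ** 0.5
    with torch.no_grad():
        return decoder(z)

common = sample_scenarios(100, rarity=1.0)
extreme = sample_scenarios(100, rarity=3.0)
print(common.mean().item(), extreme.mean().item())
```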

How to cite: Watson, C., Guevara, J., Szwarcman, D., Oliveira, D., Tizzei, L., Garcia, M., Avegliano, P., and Zadrozny, B.: Choose your own weather adventure: deep weather generation for “what-if” climate scenarios, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10773, https://doi.org/10.5194/egusphere-egu22-10773, 2022.

EGU22-10888 | Presentations | ITS2.7/AS5.2

How to utilize deep learning to understand climate dynamics? : An ENSO example. 

Na-Yeon Shin, Yoo-Geun Ham, Jeong-Hwan Kim, Minsu Cho, and Jong-Seong Kug

Many deep learning technologies have been applied to the Earth sciences, including weather forecasting, climate prediction, parameterization, resolution improvement, etc. Nonetheless, the difficulty in interpreting deep learning results still prevents their application to studies on climate dynamics. Here, we applied a convolutional neural network to understand El Niño–Southern Oscillation (ENSO) dynamics from long-term climate model simulations. The deep learning algorithm successfully predicted ENSO events with a high correlation skill of 0.82 for a 9-month lead. For interpreting deep learning results beyond the prediction skill, we first developed a “contribution map,” which estimates how much each grid point and variable contribute to a final output variable. Furthermore, we introduced a “sensitivity,” which estimates how sensitively the output variable changes in response to small perturbations of the input variables, based on the resulting differences in the output. The contribution map clearly shows the most important precursors for El Niño and La Niña developments. In addition, the sensitivity clearly reveals nonlinear relations between the precursors and the ENSO index, which helps us understand the respective role of each precursor. Our results suggest that the contribution map and sensitivity would be beneficial for understanding other climate phenomena.
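
One plausible reading of the two diagnostics, sketched in PyTorch for a generic stand-in network: a gradient-times-input contribution of each grid point and variable to the predicted index, and a sensitivity obtained by perturbing an input field and differencing the output. The authors' exact definitions may differ.

```python
import torch
import torch.nn as nn

# Stand-in for a trained CNN: (variable, lat, lon) maps to an ENSO index.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
x = torch.randn(1, 3, 24, 72, requires_grad=True)   # e.g. SST, heat content, wind

# Contribution map: gradient of the output times the input, per grid point/variable.
index = model(x)
index.backward()
contribution = (x.grad * x).detach().squeeze(0)      # (variable, lat, lon)

# Sensitivity: change in the output for a small perturbation of one input field.
def sensitivity(model, x, var, eps=0.1):
    perturbed = x.detach().clone()
    perturbed[:, var] += eps
    with torch.no_grad():
        return (model(perturbed) - model(x.detach())).item()

print(contribution.shape, sensitivity(model, x, var=0))
```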

How to cite: Shin, N.-Y., Ham, Y.-G., Kim, J.-H., Cho, M., and Kug, J.-S.: How to utilize deep learning to understand climate dynamics? : An ENSO example., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10888, https://doi.org/10.5194/egusphere-egu22-10888, 2022.

EGU22-11111 | Presentations | ITS2.7/AS5.2

Machine learning based estimation of regional Net Ecosystem Exchange (NEE) constrained by atmospheric inversions and ecosystem observations 

Samuel Upton, Ana Bastos, Fabian Gans, Basil Kraft, Wouter Peters, Jacob Nelson, Sophia Walther, Martin Jung, and Markus Reichstein

Accurate estimates and predictions of the global carbon fluxes are critical for our understanding of the global carbon cycle and climate change. Reducing the uncertainty of the terrestrial carbon sink and closing the budget imbalance between sources and sinks would improve our ability to accurately project future climate change. Net Ecosystem Exchange (NEE), the net flux of biogenic carbon from the land surface to the atmosphere, is only directly measured at a sparse set of globally distributed eddy-covariance measurement sites. To estimate the terrestrial carbon flux at the regional and global scale, a global gridded estimate of NEE must be accurately upscaled from a model trained at the ecosystem level. In this study, the Fluxcom system* is used to train a site-level model on remotely-sensed and meteorological variables derived from site measurements, MODIS and ECMWF ERA5 atmospheric reanalysis data. The non-representative distribution of these site-level data along with missing disturbance histories impart known biases to current upscaling efforts. Observations of atmospheric carbon may provide important additional information, improving the accuracy of the upscaled flux estimate. 

This study adds an atmospheric observational operator to the model training process that connects the ecosystem-level flux model to top-down observations of atmospheric carbon by adding an additional term to the objective function. The target data are regionally integrated fluxes from an ensemble of atmospheric inversions corrected for fossil-fuel emissions and lateral fluxes.  Calculating the regionally integrated flux estimate at each training step is computationally infeasible. Our hypothesis is that the regional flux can be modeled with a limited set of points and that this sparse model preserves sufficient information about the phenomena to act as a constraint for the underlying ecosystem-level model, improving regional and global upscaled products.  Experimental results show improvements in the machine learning based regional estimates of NEE while preserving features such as the seasonal variability in the estimated flux.
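
A schematic of such a composite objective in PyTorch: the usual site-level error plus a penalty comparing the model's flux, aggregated over a sparse set of points sampling a region, against an inversion-derived regional flux. All names, weights and the aggregation scheme are illustrative assumptions, not the Fluxcom implementation.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(12, 64), nn.ReLU(), nn.Linear(64, 1))  # NEE model

def composite_loss(site_x, site_nee, region_x, region_area, region_flux, lam=0.1):
    """Site-level MSE plus a regional constraint from atmospheric inversions.

    region_x:    predictors at a limited set of points sampling the region
    region_area: area weight of each sampled point (summing to the region area)
    region_flux: inversion-based regionally integrated NEE (fossil/lateral corrected)
    """
    site_term = nn.functional.mse_loss(model(site_x).squeeze(-1), site_nee)
    regional_estimate = (model(region_x).squeeze(-1) * region_area).sum()
    atmos_term = (regional_estimate - region_flux) ** 2
    return site_term + lam * atmos_term

# Toy example with random data standing in for eddy-covariance and inversion targets.
site_x, site_nee = torch.randn(128, 12), torch.randn(128)
region_x, region_area = torch.randn(50, 12), torch.full((50,), 1.0 / 50)
region_flux = torch.tensor(0.3)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = composite_loss(site_x, site_nee, region_x, region_area, region_flux)
opt.zero_grad(); loss.backward(); opt.step()
```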

 

*Jung, Martin, Christopher Schwalm, Mirco Migliavacca, Sophia Walther, Gustau Camps-Valls, Sujan Koirala, Peter Anthoni, et al. 2020. “Scaling Carbon Fluxes from Eddy Covariance Sites to Globe: Synthesis and Evaluation of the FLUXCOM Approach.” Biogeosciences 17 (5): 1343–65. 

 

How to cite: Upton, S., Bastos, A., Gans, F., Kraft, B., Peters, W., Nelson, J., Walther, S., Jung, M., and Reichstein, M.: Machine learning based estimation of regional Net Ecosystem Exchange (NEE) constrained by atmospheric inversions and ecosystem observations, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11111, https://doi.org/10.5194/egusphere-egu22-11111, 2022.

EGU22-11216 | Presentations | ITS2.7/AS5.2

Unsupervised clustering of Lagrangian trajectories in the Labrador Current 

Noémie Planat and Mathilde Jutras

Lagrangian studies are a widely-used and powerful way to analyse and interpret phenomena in oceanography and atmospheric sciences. Such studies can be based on datasets consisting either of real trajectories (e.g. oceanic drifters or floats) or of virtual trajectories computed from model velocity outputs or observation-derived velocities. Such data can help investigate pathways of water masses, pollutants or storms, or identify important convection areas, to name a few applications. As many of these analyses are based on large volumes of data that can be challenging to examine, machine learning can provide an efficient and automated way to classify information or detect patterns.

Here, we present an application of unsupervised clustering to the identification of the main pathways of the shelf-break branch of the Labrador Current, a critical component of the North Atlantic circulation. The current flows southward along the Labrador Shelf and splits in the region of the Grand Banks, either retroflecting north-eastward and feeding the subpolar basin of the North Atlantic Ocean (SPNA) or continuing westward along the shelf-break, feeding the Slope Sea and the east coast of North America. The proportion feeding each area impacts their salinity and convection, as well as their biogeochemistry, with consequences on marine life.

Our dataset is composed of millions of virtual particle trajectories computed from the water velocities of the GLORYS12 ocean reanalysis. We implement an unsupervised Machine Learning clustering algorithm on the shape of the trajectories. The algorithm is a kernelized k-means++ algorithm with a minimal number of hyperparameters, coupled to a kernelized Principal Component Analysis (PCA) feature reduction. We will present the pre-processing of the data, as well as canonical and physics-based methods for choosing the hyperparameters.

The algorithm identifies six main pathways of the Labrador Current. Applying the resulting classification method to 25 years of ocean reanalysis, we quantify the relative importance of the six pathways in time and construct a retroflection index that is used to study the drivers of the retroflection variability. This study highlights the potential of such a simple clustering method for Lagrangian trajectory analysis in oceanography or in other climate applications.
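
For illustration, a small scikit-learn pipeline of the same general shape (kernel PCA feature reduction followed by k-means on flattened trajectory shapes) is sketched below on random toy trajectories; the study itself uses a kernelized k-means++ variant and millions of GLORYS12-derived trajectories.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Toy "trajectories": n particles, each a sequence of 50 (lon, lat) positions,
# flattened to one feature vector per trajectory.
n, steps = 500, 50
headings = rng.choice([0.0, 0.5, 1.0], size=n)              # three true pathways
t = np.linspace(0, 1, steps)
lon = np.outer(np.ones(n), t) + 0.02 * rng.normal(size=(n, steps))
lat = np.outer(headings, t) + 0.02 * rng.normal(size=(n, steps))
X = np.hstack([lon, lat])                                    # (n, 2 * steps)

# Kernelized feature reduction followed by k-means clustering of the shapes.
features = KernelPCA(n_components=5, kernel="rbf", gamma=0.5).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

print(np.unique(labels, return_counts=True))
```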

How to cite: Planat, N. and Jutras, M.: Unsupervised clustering of Lagrangian trajectories in the Labrador Current, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11216, https://doi.org/10.5194/egusphere-egu22-11216, 2022.

EGU22-11388 | Presentations | ITS2.7/AS5.2 | Highlight

Learning ENSO-related Principal Modes of Vegetation via a Granger-Causal Variational Autoencoder 

Gherardo Varando, Miguel-Ángel Fernández-Torres, and Gustau Camps-Valls

Tackling climate change requires understanding the complex phenomena occurring on the planet. Discovering teleconnection patterns is an essential part of this endeavor. Events like the El Niño Southern Oscillation (ENSO) impact essential climate variables at large distances, and influence the underlying Earth system dynamics. However, their automatic identification from the wealth of observational data is still unresolved. Nonlinearities, nonstationarities and the (ab)use of correlation analyses hamper the discovery of true causal patterns. Classical approaches proceed by first extracting principal modes of variability and second performing lag-correlation or Granger-causal analyses to identify possible teleconnections. While the principal modes are an effective representation of the data, they may not be causally meaningful.
To address this, we here introduce a deep learning methodology that extracts nonlinear latent representations from spatio-temporal Earth data that are Granger causal with the index altogether. The proposed algorithm consists of a variational autoencoder trained with an additional causal penalization that enforces the latent representation to be (partially) Granger-causally related to the considered signal. The causal loss term is obtained by training two additional autoregressive models to forecast some of the latent signals, one of them including the target signal as predictor. The causal penalization is finally computed by comparing the log variances of the two autoregressive models, similarly to the standard Granger causality approach. 

The major drawback of deep autoencoders with respect to classical linear principal component approaches is the lack of straightforward interpretability of the learned representations. To address this point, we perform synthetic interventions in the latent space and analyse the differences in the recovered NDVI signal.
We illustrate the feasibility of the described approach by studying the impact of ENSO on vegetation, which allows for a more rigorous study of impacts on ecosystems globally. The output maps show NDVI patterns which are consistent with the known phenomena induced by El Niño events.
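
The causal penalty itself can be pictured with a stand-alone numpy sketch: fit one autoregressive model for a latent signal from its own past, another that also includes the lagged index, and compare the residual log variances, as in a standard Granger-causality test. In the proposed method this term is computed with differentiable autoregressive models inside the VAE loss; the snippet below is only an offline illustration with synthetic data.

```python
import numpy as np

rng = np.random.default_rng(4)

def granger_penalty(latent, index, lag=3):
    """log Var(residual | own past) - log Var(residual | own past + index past).
    Larger values mean the index helps predict the latent signal (Granger causality)."""
    T = len(latent)
    rows = range(lag, T)
    own_past = np.array([latent[t - lag:t] for t in rows])
    idx_past = np.array([index[t - lag:t] for t in rows])
    target = latent[lag:]

    def residual_logvar(X):
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        return np.log(np.var(target - X @ coef))

    return residual_logvar(own_past) - residual_logvar(np.hstack([own_past, idx_past]))

# Toy signals: the latent variable is partly driven by the lagged index.
T = 2000
index = rng.normal(size=T)
latent = np.zeros(T)
for t in range(1, T):
    latent[t] = 0.6 * latent[t - 1] + 0.5 * index[t - 1] + 0.3 * rng.normal()

print("penalty (should be clearly positive):", round(granger_penalty(latent, index), 3))
```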

How to cite: Varando, G., Fernández-Torres, M.-Á., and Camps-Valls, G.: Learning ENSO-related Principal Modes of Vegetation via a Granger-Causal Variational Autoencoder, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11388, https://doi.org/10.5194/egusphere-egu22-11388, 2022.

EGU22-11451 | Presentations | ITS2.7/AS5.2

Time evolution of temperature profiles retrieved from 13 years of IASI data using an artificial neural network 

Marie Bouillon, Sarah Safieddine, Simon Whitburn, Lieven Clarisse, Filipe Aires, Victor Pellet, Olivier Lezeaux, Noëlle A. Scott, Marie Doutriaux-Boucher, and Cathy Clerbaux

The IASI remote sensor measures Earth’s thermal infrared radiation over 8461 channels between 645 and 2760 cm-1. Atmospheric temperatures at different altitudes can be retrieved from the radiances measured in the CO2 absorption bands (645-800 cm-1 and 2250-2400 cm-1) by selecting the channels that are the most sensitive to the temperature profile. The three IASI instruments on board the Metop suite of satellites, launched in 2006, 2012 and 2018, will provide a long time series for temperature, adequate for studying the long-term evolution of atmospheric temperature. However, over the past 14 years, EUMETSAT, which processes radiances and computes atmospheric temperatures, has carried out several updates of the processing algorithms for both radiances and temperatures, leading to non-homogeneous time series and thus considerable difficulties in computing trends for temperature and atmospheric composition.

 

In 2018, EUMETSAT reprocessed the radiances with the most recent version of the algorithm, so a homogeneous radiance dataset is now available. In this study, we retrieve a new temperature record from the homogeneous IASI radiances using an artificial neural network (ANN). We train the ANN with IASI radiances as input and the European Centre for Medium-Range Weather Forecasts reanalysis ERA5 temperatures as output. We validate the results using ERA5 and in situ radiosonde temperatures from the ARSA database. Between 750 and 7 hPa, where IASI has most of its sensitivity, a very good agreement is observed between the three datasets. This work suggests that ANNs can be a simple yet powerful tool to retrieve IASI temperatures at different altitudes in the upper troposphere and in the stratosphere, allowing us to construct a homogeneous and consistent temperature data record.
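
A minimal, hypothetical illustration of the retrieval setup with scikit-learn: a feed-forward network trained to map a vector of selected CO2-band radiances to temperatures at several pressure levels, with ERA5-like values as the training target. The data, channel selection and network below are invented stand-ins for the much larger real setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Synthetic stand-ins: 200 selected CO2-band channels -> temperature at 15 levels.
n_obs, n_channels, n_levels = 5000, 200, 15
true_weights = rng.normal(size=(n_channels, n_levels))
radiances = rng.normal(size=(n_obs, n_channels))
era5_temperature = 250 + radiances @ true_weights * 0.1 \
                   + rng.normal(0, 0.5, (n_obs, n_levels))

X_train, X_test, y_train, y_test = train_test_split(radiances, era5_temperature,
                                                    test_size=0.2, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
ann.fit(X_train, y_train)

rmse = np.sqrt(np.mean((ann.predict(X_test) - y_test) ** 2, axis=0))
print("per-level RMSE (K):", np.round(rmse, 2))
```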

 

We use this new dataset to study extreme events such as sudden stratospheric warmings, and to compute trends over the IASI coverage period [2008-2020]. We find that over the past thirteen years there has been a general warming trend of the troposphere, which is more pronounced at the poles and at mid-latitudes (0.5 K/decade at mid-latitudes, 1 K/decade at the North Pole). The stratosphere is cooling globally on average, except at the South Pole as a result of the ozone layer recovery and a sudden stratospheric warming in 2019. The cooling is most pronounced in the equatorial upper stratosphere (-1 K/decade).

How to cite: Bouillon, M., Safieddine, S., Whitburn, S., Clarisse, L., Aires, F., Pellet, V., Lezeaux, O., Scott, N. A., Doutriaux-Boucher, M., and Clerbaux, C.: Time evolution of temperature profiles retrieved from 13 years of IASI data using an artificial neural network, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11451, https://doi.org/10.5194/egusphere-egu22-11451, 2022.

EGU22-12165 | Presentations | ITS2.7/AS5.2

Filling in the Gaps: Consistently detecting previously unidentified extreme weather event impacts 

M. Schwarz and F. Pretis

Existing databases for extreme weather events such as floods, heavy rainfall events, or droughts are heavily reliant on authorities and weather services manually entering details about the occurrence of an event. This reliance has led to a massive geographical imbalance in the likelihood of extreme weather events being recorded, with a vast number of events, especially in the developing world, remaining unrecorded. With continuing climate change, a lack of systematic extreme weather accounting in developing countries can lead to a substantial misallocation of funds for adaptation measures. To address this imbalance, in this pilot study we combine socio-economic data with climate and geographic data and use several machine-learning algorithms as well as traditional (spatial) econometric tools to predict the occurrence of extreme weather events and their impacts in the absence of information from manual records. Our preliminary results indicate that machine-learning approaches for the detection of the impacts of extreme weather could be a crucial tool in establishing a coherent global disaster record system. Such systems could also play a role in discussions around future Loss and Damages.

How to cite: Schwarz, M. and Pretis, F.: Filling in the Gaps: Consistently detecting previously unidentified extreme weather event impacts, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12165, https://doi.org/10.5194/egusphere-egu22-12165, 2022.

EGU22-12720 | Presentations | ITS2.7/AS5.2 | Highlight

Interpretable Deep Learning for Probabilistic MJO Prediction 

Hannah Christensen and Antoine Delaunay

The Madden–Julian Oscillation (MJO) is the dominant source of sub-seasonal variability in the tropics. It consists of an eastward-moving region of enhanced convection coupled to changes in zonal winds. It is not possible to predict the precise evolution of the MJO, so subseasonal forecasts are generally probabilistic. Ideally the spread of the forecast probability distribution would vary from day to day depending on the instantaneous predictability of the MJO. Operational subseasonal forecasting models do not have this property. We present a deep convolutional neural network that produces skilful state-dependent probabilistic MJO forecasts. This statistical model accounts for intrinsic chaotic uncertainty by predicting the standard deviation about the mean, and for model uncertainty using a Monte-Carlo dropout approach. Interpretation of the mean forecasts from the neural network highlights known MJO mechanisms, providing confidence in the model, while interpretation of the predicted uncertainty indicates new physical mechanisms governing MJO predictability.
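
The two uncertainty components can be sketched in PyTorch as follows: a network head predicting both a mean and a log standard deviation for the MJO indices, trained with a Gaussian negative log-likelihood, and Monte-Carlo dropout kept active at prediction time to sample model uncertainty. The architecture, inputs and sizes are placeholders, not the network presented here.

```python
import torch
import torch.nn as nn

class ProbabilisticMJONet(nn.Module):
    """Predicts mean and log-sigma of the two MJO indices (RMM1, RMM2)."""
    def __init__(self, n_inputs=40, hidden=64, p_drop=0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_inputs, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mean_head = nn.Linear(hidden, 2)
        self.log_sigma_head = nn.Linear(hidden, 2)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.log_sigma_head(h)

def gaussian_nll(mean, log_sigma, target):
    return (0.5 * ((target - mean) / log_sigma.exp()) ** 2 + log_sigma).mean()

model = ProbabilisticMJONet()
x = torch.randn(256, 40)                 # predictors (e.g. OLR / wind features)
y = torch.randn(256, 2)                  # observed RMM indices at the lead time
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    mean, log_sigma = model(x)
    loss = gaussian_nll(mean, log_sigma, y)    # chaotic (aleatoric) uncertainty
    opt.zero_grad(); loss.backward(); opt.step()

# Monte-Carlo dropout at prediction time samples model (epistemic) uncertainty.
model.train()                             # keep dropout active
samples = torch.stack([model(x[:1])[0] for _ in range(50)])
print("MC-dropout spread of the mean forecast:", samples.std(dim=0))
```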

How to cite: Christensen, H. and Delaunay, A.: Interpretable Deep Learning for Probabilistic MJO Prediction, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12720, https://doi.org/10.5194/egusphere-egu22-12720, 2022.

EGU22-12822 | Presentations | ITS2.7/AS5.2

Assessing model dependency in CMIP5 and CMIP6 based on their spatial dependency structure with probabilistic network models 

Catharina Elisabeth Graafland and Jose Manuel Gutiérrez Gutiérrez

Probabilistic network models (PNMs) are well established data-driven modeling and machine learning prediction techniques used in many disciplines, including climate analysis. These techniques can efficiently learn the underlying (spatial) dependency structure and a consistent probabilistic model from data (e.g. gridded reanalysis or GCM outputs for particular variables; near-surface temperature in this work), thus constituting a truly probabilistic backbone of the system underlying the data. The complex structure of the dataset is encoded using both pairwise and conditional dependencies and can be explored and characterized using network and probabilistic metrics. When applied to climate data, it is shown that Bayesian networks faithfully reveal the various long-range teleconnections relevant in the dataset, in particular those emerging in El Niño periods (Graafland et al., 2020).

 

In this work we apply probabilistic Gaussian networks to extract and characterize the most essential spatial dependencies of the simulations generated by the different GCMs contributing to CMIP5 and CMIP6 (Eyring et al., 2016). In particular, we analyze the problem of model interdependency (Boé, 2018), which poses problems for the use of these multi-model simulations in practical applications (it is often not clear what exactly makes one model different from or similar to another model). We show that probabilistic Gaussian networks provide a promising tool to characterize the spatial structure of GCMs using simple metrics which can be used to analyze how and where differences in dependency structures are manifested. The probabilistic distance measure allows CMIP5 and CMIP6 models to be charted according to their closeness to reanalysis datasets that rely on observations. The measure also identifies significant atmospheric model changes that CMIP5 GCMs underwent in their transition to CMIP6.

 

References:

 

Boé, J. Interdependency in Multimodel Climate Projections: Component Replication and Result Similarity. Geophys. Res. Lett. 45, 2771–2779, DOI: 10.1002/2017GL076829 (2018).

 

Eyring, V. et al. Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geosci. Model. Dev. 9, 1937–1958, DOI: 10.5194/gmd-9-1937-2016  (2016).

 

Graafland, C.E., Gutiérrez, J.M., López, J.M. et al. The probabilistic backbone of data-driven complex networks: an example in climate. Sci Rep 10, 11484 (2020). DOI: 10.1038/s41598-020-67970-y



Acknowledgement

 

The authors would like to acknowledge project ATLAS (PID2019-111481RB-I00) funded by MCIN/AEI (doi:10.13039/501100011033). We also acknowledge support from Universidad de Cantabria and Consejería de Universidades, Igualdad, Cultura y Deporte del Gobierno de Cantabria via the “instrumentación y ciencia de datos para sondear la naturaleza del universo” project for funding this work. L.G. acknowledges support from the Spanish Agencia Estatal de Investigación through the Unidad de Excelencia María de Maeztu with reference MDM-2017-0765.



How to cite: Graafland, C. E. and Gutiérrez, J. M. G.: Assessing model dependency in CMIP5 and CMIP6 based on their spatial dependency structure with probabilistic network models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12822, https://doi.org/10.5194/egusphere-egu22-12822, 2022.

EGU22-12858 | Presentations | ITS2.7/AS5.2

Identifying drivers of extreme reductions in carbon uptake of forests with interpretable machine learning 

Mohit Anand, Gustau Camps-Valls, and Jakob Zscheischler

Forests form one of the major components of the carbon cycle and take up large amounts of carbon dioxide from the atmosphere, thereby slowing down the rate of climate change. Carbon uptake by forests is a highly complex process strongly controlled by meteorological forcing, mainly for two reasons. First, forests have a large storage capacity acting as a buffer to short-duration changes in meteorological drivers. The response can thus be very complex and extend over a long time. Second, the responses are often triggered by combinations of multiple compounding drivers including precipitation, temperature and solar radiation. Effects may compound between variables and across time. Therefore, a large amount of data is required to identify the complex drivers of adverse forest response to climate forcing. Recent advances in machine learning offer a suite of promising tools to analyse large amounts of data and address the challenge of identifying complex drivers of impacts. Here we analyse the potential of machine learning to identify the compounding drivers of reduced carbon uptake/forest mortality. To this end, we generate 200,000 years of gross and net carbon uptake from the physically-based forest model FORMIND simulating a beech forest in Germany. The climate data are generated through a weather generator (AWEGEN-1D) from bias-corrected ERA5 reanalysis data. Classical machine learning models like random forests, support vector machines and deep neural networks are trained to estimate gross primary production. Deep learning models involving convolutional layers are found to perform better than the other classical machine learning models. Initial results show that at least three years of weather data are required to predict annual carbon uptake with high accuracy, highlighting the complex lagged effects that characterize forests. We assess the performance of the different models and discuss their interpretability regarding the identification of impact drivers.
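
An illustrative 1D convolutional regression in PyTorch that maps three years of daily weather to an annual carbon-uptake value, echoing the finding that multi-year inputs are needed, is sketched below; variable names, sizes and the random data are assumptions, not the configuration used with the FORMIND simulations.

```python
import torch
import torch.nn as nn

n_years_input, days = 3, 365
n_vars = 3                                 # precipitation, temperature, radiation

model = nn.Sequential(
    nn.Conv1d(n_vars, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 1),
)

# Toy sample: batch of weather sequences and matching annual GPP values.
weather = torch.randn(64, n_vars, n_years_input * days)
annual_gpp = torch.randn(64, 1)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    loss = nn.functional.mse_loss(model(weather), annual_gpp)
    opt.zero_grad(); loss.backward(); opt.step()
```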



How to cite: Anand, M., Camps-Valls, G., and Zscheischler, J.: Identifying drivers of extreme reductions in carbon uptake of forests with interpretable machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12858, https://doi.org/10.5194/egusphere-egu22-12858, 2022.

EGU22-13345 | Presentations | ITS2.7/AS5.2

A novel approach to systematically analyze the error structure of precipitation datasets using decision trees 

Xinxin Sui, Zhi Li, Guoqiang Tang, Zong-Liang Yang, and Dev Niyogi

Multiple environmental factors influence the error structure of precipitation datasets. Conventional precipitation evaluation methods analyze, in an over-simplified way, how the statistical indicators vary with one or two factors via dimensionality reduction. As a result, the compound influences of multiple factors are superposed rather than disassembled. To overcome this deficiency, this study presents a novel approach to systematically and objectively analyze the error structure within precipitation products using decision trees. This data-driven method can analyze multiple factors simultaneously and extract the compound effects of various influencing factors. By interpreting the decision tree structures, the error characteristics of precipitation products are investigated. Three types of precipitation products (two satellite-based: ‘top-down’ IMERG and ‘bottom-up’ SM2RAIN-ASCAT, and one reanalysis: ERA5-Land) are evaluated across CONUS. The study period is from 2010 to 2019, and the ground-based Stage IV precipitation dataset is used as the ground truth. By data mining 60 binary decision trees, the spatiotemporal patterns of errors and the land surface influences are analyzed.
 
Results indicate that IMERG and ERA5-Land perform better than SM2RAIN-ASCAT with higher accuracy and more stable interannual patterns for the ten years of data analyzed. The conventional bias evaluation finds that ERA5-Land and SM2RAIN-ASCAT underestimate in summer and winter, respectively. The decision tree method cross-assesses three spatiotemporal factors and finds that underestimation of ERA5-Land occurs in the eastern part of the Rocky Mountains, and SM2RAIN-ASCAT underestimates precipitation over high latitudes, especially in winter. Additionally, the decision tree method ascribes system errors to nine physical variables, of which the distance to the coast, soil type, and DEM are the three dominant features. On the other hand, the land cover classification and the topographic position index are two relatively weak factors.
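
The core idea can be sketched with scikit-learn: fit a decision tree that predicts a product's bias from several physical and spatiotemporal factors at once, then inspect the feature importances and the splitting rules instead of averaging over one factor at a time. The predictors and the synthetic bias below are invented placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(6)

# Invented per-gridcell predictors and a synthetic bias that depends on them.
n = 5000
X = pd.DataFrame({
    "distance_to_coast_km": rng.uniform(0, 1500, n),
    "soil_type": rng.integers(0, 8, n),
    "elevation_m": rng.uniform(0, 3500, n),
    "topographic_position_index": rng.normal(0, 1, n),
    "land_cover_class": rng.integers(0, 10, n),
    "month": rng.integers(1, 13, n),
})
bias = (-0.3 * (X["elevation_m"] > 2000)
        + 0.2 * (X["distance_to_coast_km"] < 100)
        + 0.1 * rng.normal(size=n))

tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=50, random_state=0)
tree.fit(X, bias)

# Dominant factors and the learned splitting rules.
for name, imp in sorted(zip(X.columns, tree.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:30s} {imp:.2f}")
print(export_text(tree, feature_names=list(X.columns), max_depth=2))
```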

How to cite: Sui, X., Li, Z., Tang, G., Yang, Z.-L., and Niyogi, D.: A novel approach to systematically analyze the error structure of precipitation datasets using decision trees, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13345, https://doi.org/10.5194/egusphere-egu22-13345, 2022.

ITS3 – A new era of Earth and planetary observation: instrumentation for what and to whom?

EGU22-2024 | Presentations | ITS3.1/SSS1.2 | Highlight

Understanding natural hazards in a changing landscape: A citizen science approach in Kigezi highlands, southwestern Uganda 

Violet Kanyiginya, Ronald Twongyirwe, Grace Kagoro, David Mubiru, Matthieu Kervyn, and Olivier Dewitte

The Kigezi highlands, southwestern Uganda, are a mountainous tropical region with a high population density, intense rainfall, alternating wet and dry seasons and high weathering rates. As a result, the region is regularly affected by multiple natural hazards such as landslides, floods, heavy storms, and earthquakes. In addition, deforestation and land use changes are assumed to have an influence on the patterns of natural hazards and their impacts in the region. Landscape characteristics and dynamics controlling the occurrence and the spatio-temporal distribution of natural hazards in the region remain poorly understood. In this study, citizen science has been employed to document and understand the spatial and temporal occurrence of natural hazards that affect the Kigezi highlands in relation to the multi-decadal landscape change of the region. We present the methodological research framework involving three categories of participatory citizen scientists. First, a network of 15 geo-observers (i.e., citizens of local communities distributed across representative landscapes of the study area) was established in December 2019. The geo-observers were trained to use smartphones to collect information (processes and impacts) on eight different natural hazards occurring across their parishes. In a second phase, eight river watchers were selected at watershed level to monitor the stream flow characteristics. These watchers record stream water levels once daily and make flood observations. In both categories, validation and quality checks are done on the collected data for further analysis. Combined with high-resolution rainfall monitoring using rain gauges installed in the watersheds, these data are expected to characterize catchment response to flash floods. Lastly, to reconstruct the historical landscape change and natural hazard occurrences in the region, 96 elderly citizens (>70 years of age) were engaged through interviews and focus group discussions to give an account of the evolution of their landscape over the past 60 years. We constructed a historical timeline for the region to complement the participatory mapping and in-depth interviews with the elderly citizens. During the first 24 months of the project, 240 natural hazard events with accurate timing information have been reported by the geo-observers. Conversion from natural tree species to exotic species, increased cultivation of hillslopes, road construction and abandonment of terraces and fallowing practices have accelerated natural hazards, especially flash floods and landslides, in the region. Complemented by the region’s historical photos from 1954 and satellite images, major landscape dynamics have been detected. The ongoing data collection involving detailed ground-based observations with citizens shows a promising trend in the generation of new knowledge about natural hazards in the region.

How to cite: Kanyiginya, V., Twongyirwe, R., Kagoro, G., Mubiru, D., Kervyn, M., and Dewitte, O.: Understanding natural hazards in a changing landscape: A citizen science approach in Kigezi highlands, southwestern Uganda, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2024, https://doi.org/10.5194/egusphere-egu22-2024, 2022.

EGU22-2929 | Presentations | ITS3.1/SSS1.2

Possible Contributions of Citizen Science in the Development of the Next Generation of City Climate Services 

Peter Dietrich, Uta Ködel, Sophia Schütze, Felix Schmidt, Fabian Schütze, Aletta Bonn, Thora Herrmann, and Claudia Schütze

Human life in cities is already affected by climate change. The effects will become even more pronounced in the coming years and decades. A next generation of city climate services is necessary for adapting city infrastructures and service management to climate change. These services are based on advanced weather forecast models and access to diverse data. It is essential to keep in mind that each citizen is a unique individual with their own peculiarities, preferences, and behaviors. The basis of our approach is individual-specific exposure, which considers that people perceive the same conditions differently in terms of their well-being. Individual-specific exposure can be defined as the sum of all environmental conditions that affect humans during a given period of time, in a specific location, and in a specific context. Measurable abiotic parameters such as temperature, humidity, wind speed, pollution and noise are used to characterize the environmental conditions. Additional information regarding green spaces, trees, parks, kinds of streets and buildings, as well as available infrastructures, is included in the context. The recording and forecasting of environmental parameters while taking into account the context, as well as the presentation of this information in easy-to-understand and easy-to-use maps, are critical for influencing human behavior and implementing appropriate climate change adaptation measures.

We will adopt this approach within the framework of the recently started EU-funded CityCLIM project. We aim to develop and implement approaches which will explore the potential of citizen science in terms of current and historical data collection, data quality assessment and evaluation of data products. In addition, our approach will also provide strategies for individual climate data use, and the derivation and evaluation of climate change adaptation actions in cities.

In a first step, we need to define and characterize the different potential stakeholder groups involved in citizen science data collection. Citizen science offers approaches that consider citizens both as organized target groups (e.g., engaged companies, schools) and as individual persons (e.g. hobby scientists). An important point to be investigated is how to motivate citizen science stakeholder groups to sustainably collect data and make it available to science, and how to reward them accordingly. For that purpose, strategic tools, such as value proposition canvas analysis, will be applied to tailor the science-to-business and science-to-customer communications and offers to individual needs.

How to cite: Dietrich, P., Ködel, U., Schütze, S., Schmidt, F., Schütze, F., Bonn, A., Herrmann, T., and Schütze, C.: Possible Contributions of Citizen Science in the Development of the Next Generation of City Climate Services, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2929, https://doi.org/10.5194/egusphere-egu22-2929, 2022.

EGU22-4168 | Presentations | ITS3.1/SSS1.2

Extending Rapid Image Classification with the Picture Pile Platform for Citizen Science 

Tobias Sturn, Linda See, Steffen Fritz, Santosh Karanam, and Ian McCallum

Picture Pile is a flexible web-based and mobile application for ingesting imagery from satellites, orthophotos, unmanned aerial vehicles and/or geotagged photographs for rapid classification by volunteers. Since 2014, there have been 16 different crowdsourcing campaigns run with Picture Pile, which has involved more than 4000 volunteers who have classified around 11.5 million images. Picture Pile is based on a simple mechanic in which users view an image and then answer a question, e.g., do you see oil palm, with a simple yes, no or maybe answer by swiping the image to the right, left or downwards, respectively. More recently, Picture Pile has been modified to classify data into categories (e.g., crop types) as well as continuous variables (e.g., degree of wealth) so that additional types of data can be collected.

The Picture Pile campaigns have covered a range of domains from classification of deforestation to building damage to different types of land cover, with crop type identification as the latest ongoing campaign through the Earth Challenge network. Hence, Picture Pile can be used for many different types of applications that need image classifications, e.g., as reference data for training remote sensing algorithms, validation of remotely sensed products or training data of computer vision algorithms. Picture Pile also has potential for monitoring some of the indicators of the United Nations Sustainable Development Goals (SDGs). The Picture Pile Platform is the next generation of the Picture Pile application, which will allow any user to create their own ‘piles’ of imagery and run their own campaigns using the system. In addition to providing an overview of Picture Pile, including some examples of relevance to SDG monitoring, this presentation will provide an overview of the current status of the Picture Pile Platform along with the data sharing model, the machine learning component and the vision for how the platform will function operationally to aid environmental monitoring.

How to cite: Sturn, T., See, L., Fritz, S., Karanam, S., and McCallum, I.: Extending Rapid Image Classification with the Picture Pile Platform for Citizen Science, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4168, https://doi.org/10.5194/egusphere-egu22-4168, 2022.

EGU22-5094 | Presentations | ITS3.1/SSS1.2

Life in undies – Preliminary results of a citizen science data collection targeting soil health assessement in Hungary 

Mátyás Árvai, Péter László, Tünde Takáts, Zsófia Adrienn Kovács, Kata Takács, János Mészaros, and László Pásztor

Last year, the Institute for Soil Sciences, Centre for Agricultural Research launched Hungary's first citizen science project with the aim of obtaining information on the biological activity of soils using a simple estimation procedure. With the help of social media, responses to the call for participation were received from nearly 2000 locations.

In the Hungarian version of the international Soil your Undies programme, standardized cotton underwear was posted to the participants together with a step-by-step tutorial; the participants buried their underwear for about 60 days, from mid-May until July 2021, at a depth of about 20-25 cm. After the excavation, the participants took one digital image of the underwear and recorded the geographical coordinates, which were uploaded to a Google Forms interface together with some basic information related to the location and the user (type of cultivation, demographic data, etc.).

By analysing digital photos of the excavated undies made by volunteers, we obtained information on the extent to which the cotton material had decomposed in certain areas and under different types of cultivation. Around 40% of the participants buried the underwear in gardens, 21% in grassland, 15% in orchards, 12% in arable land, 5% in vineyards and 4% in forest (for 3% no land use data were provided).

The images were first processed using the Fococlipping and Photoroom software for background removal, and the percentage of cotton material remaining was then estimated from the pixels using the R ‘raster’ package in RStudio.

The biological activity data collected countrywide from nearly 1200 sites were statistically evaluated by spatially aggregating them over both physiographical and administrative units. The results have been published on various platforms (Facebook, Instagram, a dedicated website, etc.), and feedback is also given directly to the volunteers.

Based on this experience, the first citizen science programme proved to be successful. 

 

Acknowledgment: Our research was supported by the Hungarian National Research, Development and Innovation Office (NKFIH; K-131820)

Keywords: citizen science; soil life; soil health; biological activity; soil properties

How to cite: Árvai, M., László, P., Takáts, T., Kovács, Z. A., Takács, K., Mészaros, J., and Pásztor, L.: Life in undies – Preliminary results of a citizen science data collection targeting soil health assessment in Hungary, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5094, https://doi.org/10.5194/egusphere-egu22-5094, 2022.

EGU22-5147 | Presentations | ITS3.1/SSS1.2

Distributed databases for citizen science 

Julien Malard-Adam, Joel Harms, and Wietske Medema

Citizen science is often heavily dependent on software tools that allow members of the general population to collect, view and submit environmental data to a common database. While several such software platforms exist, these often require expert knowledge to set up and maintain, and server and data hosting can become quite costly in the long term, especially if a project is successful in attracting many users and data submissions. In the context of time-limited project funding, these limitations can pose serious obstacles to the long-term sustainability of citizen science projects as well as to their ownership by the community.

On the other hand, distributed database systems (such as Qri and Constellation) dispense with the need for a centralised server and instead rely on the devices (smartphones or computers) of the users themselves to store and transmit community-generated data. This new approach leads to the counterintuitive result that distributed systems, contrary to centralised ones, become more robust and offer better availability and response times as the size of the user pool grows. In addition, since data are stored on users’ own devices, distributed systems offer interesting potential for strengthening communities’ ownership of their own environmental data (data sovereignty). This presentation will discuss the potential of distributed database systems to address the current technological limitations of centralised systems for open data and citizen science-led data collection efforts and will give examples of use cases with currently available distributed database software platforms.

How to cite: Malard-Adam, J., Harms, J., and Medema, W.: Distributed databases for citizen science, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5147, https://doi.org/10.5194/egusphere-egu22-5147, 2022.

EGU22-5571 | Presentations | ITS3.1/SSS1.2

RESECAN: citizen-driven seismology on an active volcano (Cumbre Vieja, La Palma Island, Canaries) 

Rubén García-Hernández, José Barrancos, Luca D'Auria, Vidal Domínguez, Arturo Montalvo, and Nemesio Pérez

During the last decades, countless seismic sensors have been deployed throughout the planet by different countries and institutions. In recent years, it has become possible to manufacture low-cost MEMS accelerometers thanks to nanotechnology and large-scale development. These devices can be easily configured and accurately synchronized by GPS. Customizable microcontrollers and single-board computers such as Arduino or Raspberry Pi can be used to develop low-cost seismic stations capable of local data storage and real-time data transfer. Such stations have sufficient signal quality to complement conventional seismic networks.

In recent years Instituto Volcanológico de Canarias (INVOLCAN) has developed a proprietary low-cost seismic station to implement the Canary Islands School Seismic Network (Red Sísmica Escolar Canaria - RESECAN) with multiple objectives:

  • supporting the teaching of geosciences.
  • promoting scientific vocations.
  • strengthening the resilience of local communities by improving awareness of volcanism and the associated hazards.
  • densifying the existing seismic networks.

On Sept. 19th, 2021, a volcanic eruption started on the Cumbre Vieja volcano in La Palma. The eruption was preceded and accompanied by thousands of earthquakes, many of them felt with intensities up to V MCS. Exploiting the attention drawn by the eruption, INVOLCAN started deploying low-cost seismic stations in educational centres in La Palma. In this preliminary phase, we selected five educational centres on the island.

The project's objective is to create and distribute low-cost stations to various educational institutions in La Palma and, later, across the whole Canary Islands Archipelago, supplementing them with educational material on seismology and volcanology. Each school will be able to access the data of its own station as well as those collected by other centres, and will be able to locate some of the recorded earthquakes. The data recorded by RESECAN will also be integrated into the broadband seismic network operated by INVOLCAN (Red Sísmica Canaria, C7). RESECAN will be an instrument of scientific utility capable of contributing effectively to the volcano monitoring of the Canary Islands, reinforcing their resilience with respect to future volcanic emergencies.

How to cite: García-Hernández, R., Barrancos, J., D'Auria, L., Domínguez, V., Montalvo, A., and Pérez, N.: RESECAN: citizen-driven seismology on an active volcano (Cumbre Vieja, La Palma Island, Canaries), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5571, https://doi.org/10.5194/egusphere-egu22-5571, 2022.

EGU22-6970 | Presentations | ITS3.1/SSS1.2

Analysis of individual learning outcomes of students and teachers in the citizen science project TeaTime4Schools 

Anna Wawra, Martin Scheuch, Bernhard Stürmer, and Taru Sanden

Only a few of the increasing number of citizen science projects set out to determine the project's impact on the diverse learning outcomes of citizen scientists. However, besides the pure completion of project activities and data collection, measurable benefits such as individual learning outcomes (ILOs) (Phillips et al. 2014) should reward voluntary work.

Within the citizen science project “TeaTime4Schools”, Austrian students aged 13 to 18 years collected data as a group activity in a teacher-guided school context; tea bags were buried in soil to investigate litter decomposition. In an online questionnaire, a set of selected ILO scales (Phillips et al. 2014, Keleman-Finan et al. 2018, Wilde et al. 2009) was applied to test the ILOs of students who participated in TeaTime4Schools. Several indicators (scales for project-related response, interest in science, interest in soil, environmental activism, and self-efficacy) were specifically tailored from these evaluation frameworks to measure four main learning outcomes: interest, motivation, behavior, and self-efficacy. In total, 106 valid replies from students were analyzed. In addition, 21 teachers who participated in TeaTime4Schools answered a separate online questionnaire that directly asked about the quality and liking of the methods used in the project, based on scales about learning tasks suggested by the University College for Agricultural and Environmental Education (2015), which were modified for the purpose of this study. Findings of our research will be presented.

How to cite: Wawra, A., Scheuch, M., Stürmer, B., and Sanden, T.: Analysis of individual learning outcomes of students and teachers in the citizen science project TeaTime4Schools, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6970, https://doi.org/10.5194/egusphere-egu22-6970, 2022.

EGU22-7164 | Presentations | ITS3.1/SSS1.2

Seismic and air monitoring observatory for greater Beirut : a citizen observatory of the "urban health" of Beirut 

Cecile Cornou, Laurent Drapeau, Youssef El Bakouny, Samer Lahoud, Alain Polikovitch, Chadi Abdallah, Charbel Abou Chakra, Charbel Afif, Ahmad Al Bitar, Stephane Cartier, Pascal Fanice, Johnny Fenianos, Bertrand Guillier, Carla Khater, and Gabriel Khoury and the SMOAG Team

Already sensitive because of its geology (seismic and tsunami risk) and its position at the interface between arid and temperate ecosystems, the Mediterranean Basin is being transformed by climate change and major urban pressure on resources and spaces. Lebanon concentrates on a small territory the environmental, climatic, health, social and political crises of the Middle East: shortages and degradation of surface and groundwater quality, air pollution, landscape fragmentation, destruction of ecosystems, erosion of biodiversity, telluric risks and very few mechanisms of information, prevention and protection against these vulnerabilities. Further, Lebanon is sorely lacking in environmental data at sufficient temporal and spatial scales to cover the range of key phenomena and to allow the integration of environmental issues into the country's development. This absence was sadly illustrated during the August 4th, 2020, explosion at the port of Beirut, which hindered the effective management of induced threats to protect the inhabitants. In this degraded context, combined with a systemic crisis situation in Lebanon, frugal innovation is more than an option, it is a necessity. Initiated in 2021 within the framework of the O-LIFE Lebanese-French research consortium (www.o-life.org), the « Seismic and air monitoring observatory for greater Beirut » (SMOAG) project aims at setting up a citizen observatory of the urban health of Beirut by deploying innovative, connected, low-cost, energy-efficient and robust environmental and seismological instruments. Through web services and mobile applications co-constructed with various stakeholders (citizens, NGOs, decision makers and scientists), the SMOAG citizen observatory will contribute to the information and mobilization of Lebanese citizens and managers by sharing the monitoring of key indicators associated with air quality, heat islands and building stability, essential issues for a sustainable Beirut.

The first phase of the project was dedicated to the development of a low-cost environmental sensor enabling pollution and urban weather measurements (particulate matter, SO2, CO, O3, NO2, solar radiation, wind speed, temperature, humidity, rainfall) and to the development of the entire software infrastructure, from data acquisition to synoptic indicators accessible via web and mobile applications, following the Sensor Web Enablement and Sensor Observation Service standards of the OGC and the FAIR principles (Findable, Accessible, Interoperable, Reusable). A website and Android/iOS applications for the restitution of data and indicators, and a dashboard allowing real-time access to the data, have been developed. Environmental and low-cost seismological stations (Raspberry Shake) have already been deployed in Beirut, most of them hosted by Lebanese citizens. These instrumental and open data access efforts were complemented by participatory workshops with various stakeholders to improve the ergonomics of the web and application interfaces and to define a roadmap for the installation of future stations, consistent with the most vulnerable populations identified by NGOs and the current knowledge of air pollution and heat islands in Beirut.

How to cite: Cornou, C., Drapeau, L., El Bakouny, Y., Lahoud, S., Polikovitch, A., Abdallah, C., Abou Chakra, C., Afif, C., Al Bitar, A., Cartier, S., Fanice, P., Fenianos, J., Guillier, B., Khater, C., and Khoury, G. and the SMOAG Team: Seismic and air monitoring observatory for greater Beirut : a citizen observatory of the "urban health" of Beirut, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7164, https://doi.org/10.5194/egusphere-egu22-7164, 2022.

EGU22-7323 | Presentations | ITS3.1/SSS1.2

Citizen science for better water quality management in the Brantas catchment, Indonesia? Preliminary results 

Reza Pramana, Schuyler Houser, Daru Rini, and Maurits Ertsen

Water quality in the rivers and tributaries of the Brantas catchment (about 12,000 km2) is deteriorating due to various causes, including rapid economic development, insufficient domestic water treatment and waste management, and industrial pollution. Various water quality parameters are measured at least on a monthly basis by the agencies involved in water resource development and management. However, measurements consistently demonstrate exceedance of the local water quality standards. Recent claims presented by the local Environmental Protection Agency indicate that water quality is affected much more by domestic sources than by others. To examine this, we proposed a citizen science campaign involving people from seven communities living close to the river, a network organisation that works on water quality monitoring, three government agencies, and students from a local university. Beginning in 2022, we kicked off our campaign by measuring nitrate, nitrite, and phosphate with test strips on a weekly basis at twelve different locations from upstream to downstream of the catchment. In the effort to provide education on water stewardship and empower citizens to participate in water quality management, preliminary results – the test strips, strategies, and challenges – will be shown.

How to cite: Pramana, R., Houser, S., Rini, D., and Ertsen, M.: Citizen science for better water quality management in the Brantas catchment, Indonesia? Preliminary results, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7323, https://doi.org/10.5194/egusphere-egu22-7323, 2022.

EGU22-7916 | Presentations | ITS3.1/SSS1.2

Citizen science - an invaluable tool for obtaining high-resolution spatial and temporal meteorological data 

Jadranka Sepic, Jure Vranic, Ivica Aviani, Drago Milanovic, and Miro Burazer

Quality-checked institutional meteorological data are often not measured at locations of particular interest for observing specific small-scale and meso-scale atmospheric processes. Similarly, institutional data can be hard to obtain due to data policy restrictions. On the other hand, many people are highly interested in meteorology, and they frequently deploy meteorological instruments at the locations where they live. Such citizen data are often shared through public data repositories and websites with sophisticated visualization routines. As a result, networks of citizen meteorological stations are, in numerous areas, denser and more easily accessible than the institutional meteorological networks.

Several examples of publicly available citizen meteorological networks, including school networks, are explored, and their use in published high-quality scientific papers is discussed. It is shown that for data-based analyses of specific atmospheric processes of interest, such as mesoscale convective disturbances and mesoscale atmospheric gravity waves, the best qualitative and quantitative results are often obtained using dense citizen networks.

Finally, a “cheap and easy to do” project of constructing a meteorological station with a variable number of atmospheric sensors is presented. Suggestions on how to use such stations in educational and citizen science activities, and even in real-time warning systems, are given.  

How to cite: Sepic, J., Vranic, J., Aviani, I., Milanovic, D., and Burazer, M.: Citizen science - an invaluable tool for obtaining high-resolution spatial and temporal meteorological data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7916, https://doi.org/10.5194/egusphere-egu22-7916, 2022.

Among the greatest constraints to accurately monitoring and understanding climate and climate change in many locations is limited in situ observing capacity and resolution in these places. Climate behaviours along with dependent environmental and societal processes are frequently highly localized, while observing systems in the region may be separated by hundreds of kilometers and may not adequately represent conditions between them. Similarly, generating climate equity in urban regions can be hindered by an inability to resolve urban heat islands at neighborhood scales. In both cases, higher density observations are necessary for accurate condition monitoring, research, and for the calibration and validation of remote sensing products and predictive models. Coincidentally, urban neighborhoods are heavily populated and thousands of individuals visit remote locations each day for recreational purposes. Many of these individuals are concerned about climate change and are keen to contribute to climate solutions. However, there are several challenges to creating a voluntary citizen science climate observing program that addresses these opportunities. The first is that such a program has the potential for limited uptake if participants are required to volunteer their time or incur a significant cost to participate. The second is that researchers and decision-makers may be reluctant to use the collected data owing to concern over observer bias. This paper describes the on-going development and implementation by 2DegreesC.org of a technology-driven citizen science approach in which participants are equipped with low-cost automated sensors that systematically sample and communicate scientifically valid climate observations while they focus on other activities (e.g., recreation, gardening, fitness). Observations are acquired by a cloud-based system that quality controls, anonymizes, and makes them openly available. Simultaneously, individuals of all backgrounds who share a love of the outdoors become engaged in the scientific process via data-driven communication, research, and educational interactions. Because costs and training are minimized as barriers to participation, data collection is opportunistic, and the technology can be used almost anywhere, this approach is dynamically scalable with the potential for millions of participants to collect billions of new, accurate observations that integrate with and enhance existing observational network capacity.
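As a minimal illustration of the kind of quality-control and anonymization step described for the cloud-based system, the following Python sketch range-checks an observation, strips the user identifier and coarsens the location before release; the field names, plausibility limits and rounding are illustrative assumptions, not details of the 2DegreesC.org pipeline.

    # Illustrative quality control and anonymization of one opportunistic
    # citizen observation; field names and thresholds are assumptions.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Observation:
        user_id: str        # removed before publication
        lat: float
        lon: float
        air_temp_c: float

    def qc_and_anonymize(obs: Observation) -> Optional[dict]:
        """Return an anonymized record, or None if the observation fails QC."""
        # Gross plausibility check on the measured value (illustrative limits).
        if not -90.0 <= obs.air_temp_c <= 60.0:
            return None
        # Drop the user identifier and coarsen the coordinates (~100 m) so that
        # individual movements cannot be reconstructed from the open data.
        return {"lat": round(obs.lat, 3), "lon": round(obs.lon, 3),
                "air_temp_c": obs.air_temp_c}

    print(qc_and_anonymize(Observation("hiker42", 46.5482, 7.9822, 3.4)))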

How to cite: Shein, K.: Linking citizen scientists with technology to reduce climate data gaps, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10634, https://doi.org/10.5194/egusphere-egu22-10634, 2022.

The 2019-2020 bushfire season (the Black Summer) in Australia was unprecedented in its breadth and severity, as well as in the disruption to the resources and time dedicated to studying it. Right after one of the most extreme fire seasons on record had hit Australia, a once-in-a-century global pandemic, COVID-19, occurred. This pandemic caused world-wide lockdowns throughout 2020 and 2021 that prevented travel and field work, thus hindering researchers from assessing the damage done by the Black Summer bushfires. Early assessments show that the bushfires on Kangaroo Island, South Australia caused declines in soil nutrients and ground coverage up to 10 months post-fire, indicating a higher risk of soil erosion and fire-induced land degradation at this location. In parallel to the direct impacts the Black Summer bushfires had on native vegetation and soil, the New South Wales Nature Conservation Council observed a noticeable increase in demand for fire management workshops in 2020. What did citizens observe of the fires and of post-fire outcomes on soil and vegetation during the 2019-2020 bushfire season that drove so many of them into action? In collaboration with the New South Wales Nature Conservation Council and Rural Fire Service through the Hotspots Fire Project, we will be surveying and interviewing landowners across New South Wales to collect their observations and insights regarding the Black Summer. By engaging landowners, this project aims to answer the following question: within New South Wales, Australia, what impact did the 2019-2020 fire season have on a) soil health and native vegetation and b) human behaviours and perceptions of fire in the Australian landscape? The quantity of insights gained from NSW citizens will provide a broad assessment of fire impacts across multiple soil and ecosystem types, providing knowledge of the impacts of severe fires, such as those that occurred during the Black Summer, to the scientific community. Furthermore, with the knowledge gained from citizens' reflections, the Hotspots Fire Project will be better able to train and support workshop participants, while expanding the coverage of workshops to improve support of landowners across the state. Data regarding fire impacts on soil, ecosystems, and communities have been collected by unknowing citizen scientists all across New South Wales, and to gain access to those data, we need only ask.

How to cite: Ondik, M., Ooi, M., and Muñoz-Rojas, M.: Insights from landowners on Australia's Black Summer bushfires: impacts on soil and vegetation, perceptions, and behaviours, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10776, https://doi.org/10.5194/egusphere-egu22-10776, 2022.

High air pollution concentration levels and increased urban heat island intensity are amongst the most critical contemporary urban health concerns. This is the reason why various municipalities are starting to invest in extensive direct air quality and microclimate sensing networks. Through the study of these datasets, it has become evident that an understanding of inter-urban environmental gradients is imperative to effectively introduce urban land-use strategies that improve the environmental conditions in the neighborhoods that suffer the most, and to develop city-scale urban planning solutions for better urban health. However, given economic limitations or divergent political views, extensive direct environmental sensing networks have not yet been implemented in most cities. While the validity of citizen science environmental datasets is often questioned, given that they rely on low-cost sensing technologies and fail to incorporate sensor calibration protocols, they can offer an alternative to municipal sensing networks if the necessary Quality Assurance / Quality Control (QA/QC) protocols are put in place.

This research has focused on the development of a QA/QC protocol for the study of urban environmental data collected by the citizen science PurpleAir initiative in the Bay Area and the city of Los Angeles, where over 700 PurpleAir stations have been deployed in recent years. Following the QA/QC process, the PurpleAir data were studied in combination with remote sensing datasets on land surface temperature and the normalized difference vegetation index, and geospatial datasets on socio-demographic and urban fabric parameters. Through a footprint-based study, and for all PurpleAir station locations, the featured variables and the buffer sizes with the highest correlations have been identified to compute the inter-urban environmental gradient predictions, making use of three supervised machine learning models: a regression tree ensemble, a support vector machine, and Gaussian process regression.
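A minimal sketch of how the three supervised models named above could be fitted to footprint-averaged predictors in Python with scikit-learn; the input file, column names, hyperparameters and train/test split are illustrative assumptions, not the study's actual configuration.

    # Fit and compare the three regressors on footprint-averaged predictors of
    # a PM2.5 target; the CSV layout and column names are assumptions.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.svm import SVR
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.metrics import r2_score

    df = pd.read_csv("purpleair_footprints.csv")          # hypothetical file
    X = df[["lst_mean", "ndvi_mean", "pop_density", "road_density"]]
    y = df["pm25_mean"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "regression tree ensemble": RandomForestRegressor(n_estimators=300, random_state=0),
        "support vector machine": SVR(C=10.0, epsilon=0.1),
        "Gaussian process regression": GaussianProcessRegressor(),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(f"{name}: R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")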

How to cite: Llaguno-Munitxa, M., Bou-Zeid, E., Rueda, P., and Shu, X.: Citizen-science urban environmental monitoring for the development of an inter-urban environmental prediction model for the city of Los Angeles, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11765, https://doi.org/10.5194/egusphere-egu22-11765, 2022.

EGU22-11797 | Presentations | ITS3.1/SSS1.2

Attitudes towards a cafetiere-style filter system and paper-based analysis pad for soil nutrition surveillance in-situ: evidence from Kenya and Vietnam 

Samantha Richardson, Philip Kamau, Katie J Parsons, Florence Halstead, Ibrahim Ndirangu, Vo Quang Minh, Van Pham Dang Tri, Hue Le, Nicole Pamme, and Jesse Gitaka

Routine monitoring of soil chemistry is needed for effective crop management, since a poor understanding of nutrient levels affects crop yields and ultimately farmers’ livelihoods [1]. In low- and middle-income countries, soil sampling is usually limited due to the need to access analytical services and the high costs of portable sampling equipment [2]. We are developing portable and low-cost sampling and analysis tools which would enable farmers to test their own land and make informed decisions around the need for fertilizers. In this study we aimed to understand the attitudes of key stakeholders towards this technology and towards collecting the gathered data on public databases, which could inform decisions at government level to better manage agriculture across a country.

 

In Kenya, we surveyed 549 stakeholders from Murang’a and Kiambu counties, 77% men and 23% women. Of these respondent smallholder farmers, 17.2% were youthful farmers aged 18-35 years, and 81.9% of the farming enterprises were male-headed and 18.1% female-headed. The survey covered current knowledge of soil nutrition, existing soil management practices, the desire to sample soil in the future, attitudes towards our developed prototypes, motivation towards the democratization of soil data, and willingness to pay for the technology. In Vietnam, a smaller mixed-methods online survey was distributed via national farming unions to 27 stakeholders, in particular engaging younger farmers with an interest in technology and innovation.

Within the Kenyan cohort, only 1.5% of farmers currently test for nutrients and pH. Reasons given for not testing included a lack of knowledge about soil testing (35%), distance to testing centers (34%) and high costs (16%). However, 97% of respondents were interested in soil sampling at least once a year, particularly monitoring nitrates and phosphates. Nearly all participants (94-99% among males, females and youths) found a cost of around USD 11-12 for repeated analysis of soil samples affordable for their business. Regarding sharing the collected data, 88% believed this would be beneficial, for example citing that data shared with intervention agencies and agricultural officers could help them receive relevant advice.

In Vietnam, 87% of farmers did not have their soil nutrient levels tested, with 62% saying they did not know how and 28% indicating prohibitive costs. Most currently relied on local knowledge and observations to improve their soil quality. 87% thought that the system we were proposing was affordable, with only 6% saying they would not be interested in trialing this new technology. Regarding the soil data, respondents felt that it should be open access and available to everyone.

Our surveys confirmed the need and perceived benefit for our proposed simple-to-operate and cost-effective workflow, which would enable farmers to test soil chemistry themselves on their own land. Farmers were also found to be motivated towards sharing their soil data to get advice from government agencies. The survey results will inform our further development of low-cost, portable analytical tools for simple on-site measurements of nutrient levels within soil.

 

1. Dimkpa, C., et al., Sustainable Agriculture Reviews, 2017, 25, 1-43.

2. Zingore, S., et al., Better Crops, 2015, 99 (1), 24-26.

How to cite: Richardson, S., Kamau, P., Parsons, K. J., Halstead, F., Ndirangu, I., Minh, V. Q., Tri, V. P. D., Le, H., Pamme, N., and Gitaka, J.: Attitudes towards a cafetiere-style filter system and paper-based analysis pad for soil nutrition surveillance in-situ: evidence from Kenya and Vietnam, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11797, https://doi.org/10.5194/egusphere-egu22-11797, 2022.

Keywords: preconcentration, heavy metal, cafetiere, citizen science, paper-based microfluidics

Heavy metal analysis of water samples using microfluidic paper-based analytical devices (µPAD) with colourimetric readout is of great interest due to its simplicity, affordability and potential for Citizen Science-based data collection [1]. However, this approach is limited by the relatively poor sensitivity of the colourimetric substrates, which typically achieve detection within the mg L-1 range, whereas heavy metals exist in the environment at <μg L-1 quantities [2]. Preconcentration is commonly used when the analyte concentration is below the analytical range, but this typically requires laboratory equipment and expert users [3]. Here, we are developing a simple method for pre-concentration of heavy metals, to be integrated with a µPAD workflow that would allow Citizen Scientists to carry out pre-concentration as well as readout on-site.

The filter mesh from an off-the-shelf cafetière (350 mL) was replaced with a custom-made bead carrier basket, laser-cut in PMMA sheet and featuring >500 evenly spread 100 µm diameter holes. This allowed the water sample to pass through the basket and mix efficiently with the 2.6 g of ion-exchange resin beads housed within (Lewatit® TP207, Ambersep® M4195, Lewatit® MonoPlus SP 112). An aqueous Ni2+ sample (0.3 mg L-1, 300 mL) was placed in the cafetière and the basket containing the ion-exchange material was moved up and down for 5 min to allow Ni2+ adsorption onto the resin. Initial investigations into elution with a safe, non-toxic eluent focused on using NaCl (5 M). These were carried out by placing the elution solution in a shallow dish into which the resin-containing carrier basket was submerged. UV/vis spectroscopy via a colourimetric reaction with nioxime was used to monitor Ni2+ adsorption and elution.

After 5 min of mixing, it was found that the Lewatit® TP207 and Ambersep® M4195 resins adsorbed up to 90% of the Ni2+ ions present in solution, while the Lewatit® MonoPlus SP 112 adsorbed up to 60%. However, the Lewatit® MonoPlus SP 112 resin performed better for elution with NaCl. Initial studies showed that up to 30% of the Ni2+ was eluted within only 1 min of mixing with 10 mL of 5 M NaCl.
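For illustration, the following Python sketch shows how absorbances from the nioxime-based UV/vis measurements could be converted into adsorption and elution efficiencies of the kind quoted above; the calibration points, absorbance readings, dilution factor and volumes are invented for the example and are not measured values from this study.

    # Minimal sketch of turning UV/vis absorbances into adsorption and elution
    # efficiencies via a linear (Beer-Lambert) calibration. All numbers below
    # are illustrative assumptions.
    import numpy as np

    # Nioxime-complex calibration standards: concentration (mg/L) vs. absorbance.
    cal_conc = np.array([0.00, 0.10, 0.20, 0.30, 0.40])
    cal_abs  = np.array([0.00, 0.08, 0.17, 0.25, 0.33])
    slope, intercept = np.polyfit(cal_conc, cal_abs, 1)

    def conc(absorbance: float, dilution: float = 1.0) -> float:
        """Ni2+ concentration in mg/L from absorbance (linear calibration fit)."""
        return dilution * (absorbance - intercept) / slope

    c0, v_sample = 0.30, 0.300          # starting solution: 0.3 mg/L, 300 mL
    c_left = conc(0.025)                # absorbance after 5 min contact with the resin
    adsorbed_mg = (c0 - c_left) * v_sample
    print(f"adsorbed: {100 * (c0 - c_left) / c0:.0f} % of the Ni2+")

    v_eluate = 0.010                    # 10 mL of 5 M NaCl
    c_eluate = conc(0.20, dilution=10)  # eluate absorbance, read after a 1:10 dilution
    eluted_mg = c_eluate * v_eluate
    print(f"eluted: {100 * eluted_mg / adsorbed_mg:.0f} % of the adsorbed Ni2+")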

Using a cafetière as the pre-concentration vessel, coupled with non-hazardous reagents in the pre-concentration process, allows the involvement of citizen scientists in more advanced environmental monitoring activities that cannot be achieved with a simple paper-based sensor alone. Future work will investigate the user-friendliness of the design by trialling the system with volunteers and will aim to further improve the trapping and elution efficiencies.

 

References:

  1. Almeida, M., et al., Talanta, 2018, 177, 176-190.
  2. Lace, A., Cleary, J., Chemosens., 2021, 9, 60.
  3. Alahmad, W., et al., Biosens. Bioelectron., 2021, 194, 113574.

 

How to cite: Sari, M., Richardson, S., Mayes, W., Lorch, M., and Pamme, N.: Method development for on-site freshwater analysis with pre-concentration of nickel via ion-exchange resins embedded in a cafetière system and paper-based analytical devices for readout, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11892, https://doi.org/10.5194/egusphere-egu22-11892, 2022.

EGU22-12972 | Presentations | ITS3.1/SSS1.2 | Highlight

Collection of valuable polar data and increase in nature awareness among travellers by using Expedition Cruise Ships as platforms of opportunity 

Verena Meraldi, Tudor Morgan, Amanda Lynnes, and Ylva Grams

Hurtigruten Expeditions, a member of the International Association of Antarctica Tour Operators (IAATO) and the Association of Arctic Expedition Cruise Operators (AECO), has been visiting the fragile polar environments for two decades, witnessing the effects of climate change. Tourism and the number of ships in the polar regions have grown significantly. As a stakeholder aware of the need for long-term protection of these regions, we promote safe and environmentally responsible operations, invest in the understanding and conservation of the areas we visit, and focus on the enrichment of our guests.

For the last couple of years, we have supported the scientific community by transporting researchers and their equipment to and from their study areas in polar regions and we have established collaborations with numerous scientific institutions. In parallel we developed our science program with the goal of educating our guests about the natural environments they are in, as well as to further support the scientific community by providing our ships as platforms of opportunity for spatial and temporal data collection. Participation in Citizen Science programs that complement our lecture program provides an additional education opportunity for guests to better understand the challenges the visited environment faces while contributing to filling scientific knowledge gaps in remote areas and providing data for evidence-based decision making.

We aim to continue working alongside the scientific community and developing partnerships. We believe that scientific research and monitoring in the Arctic and Antarctic can hugely benefit from the reoccurring presence of our vessels in these areas, as shown by the many projects we have supported so far. In addition, our partnership with the Polar Citizen Science Collective, a charity that facilitates interaction between scientists running Citizen Science projects and expedition tour operators, will allow the development of programs on an industry level, rather than just an operator level, increasing the availability and choice of platforms of opportunity for the scientific community.

How to cite: Meraldi, V., Morgan, T., Lynnes, A., and Grams, Y.: Collection of valuable polar data and increase in nature awareness among travellers by using Expedition Cruise Ships as platforms of opportunity, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12972, https://doi.org/10.5194/egusphere-egu22-12972, 2022.

EGU22-13115 | Presentations | ITS3.1/SSS1.2

Participatory rainfall monitoring: strengthening hydrometeorological risk management and community resilience in Peru 

Miguel Arestegui, Miluska Ordoñez, Abel Cisneros, Giorgio Madueño, Cinthia Almeida, Vannia Aliaga, Nelson Quispe, Carlos Millán, Waldo Lavado, Samuel Huaman, and Jeremy Phillips

Heavy rainfall, floods and debris flows in the Rimac river watershed are recurring events that impact Peruvian people in vulnerable situations. There are few historical records of hydrometeorological variables with sufficient temporal and spatial accuracy. As a result, the efficiency of Early Warning Systems (EWS) dealing with these hazards is critically limited.

In order to tackle this challenge, among other objectives, the Participatory Monitoring Network (Red de Monitoreo Participativo or Red MoP, in Spanish) was formed: an alternative monitoring system supported by the voluntary collaboration of the local population under a citizen science approach. This network collects and communicates data captured with standardized manual rain gauges (< 3 USD). So far, it covers districts in the eastern metropolitan area of the capital city of Lima (dense peri-urban areas) and districts with rural towns in the upper Rimac watershed, and it is expanding to other upper watersheds as well.

Initially led by Practical Action as part of the Zurich Flood Resilience Alliance, it is now also supported by SENAMHI (National Meteorological and Hydrological Service) and INICTEL-UNI (National Telecommunications Research and Training Institute), as an activity of the National EWS Network (RNAT).

For the 2019-2022 rainfall seasons, the network has been gathering data and information from around 80 volunteers located throughout the Rimac and Chillon river watersheds (community members, local government officers, among others): precipitation, other meteorological variables, and information regarding the occurrence of events such as floods and debris flows (locally known as huaycos). SENAMHI has provided a focalized 24-h forecast for the area covered by the volunteers, experimentally combines official station data with the network's data for spatial analysis of rainfall, and, with researchers from the University of Bristol, analyses potential uses of the events gathered through this network. In order to facilitate and automate certain processes, INICTEL-UNI developed a web platform and a mobile application that is being piloted.
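As an illustration of how point rainfall totals from official stations and volunteer gauges might be merged onto a grid for spatial analysis, the sketch below applies simple inverse-distance weighting in Python; the coordinates, rainfall values and the interpolation method itself are assumptions for the example, not SENAMHI's operational procedure.

    # Toy inverse-distance-weighting interpolation of daily rainfall totals from
    # a mix of official and volunteer gauges. All coordinates and values are invented.
    import numpy as np

    # (lon, lat, rainfall in mm) for each observation
    obs = np.array([
        [-76.70, -11.90, 12.0],
        [-76.55, -11.95,  4.5],
        [-76.40, -12.00,  0.0],
        [-76.85, -11.80, 20.0],
    ])

    def idw(lon, lat, points, power=2.0):
        """Inverse-distance-weighted rainfall estimate at one query location."""
        d = np.hypot(points[:, 0] - lon, points[:, 1] - lat)
        if np.any(d < 1e-9):              # query point coincides with a gauge
            return float(points[np.argmin(d), 2])
        w = 1.0 / d**power
        return float(np.sum(w * points[:, 2]) / np.sum(w))

    print(idw(-76.60, -11.92, obs))       # interpolated rainfall at one grid node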

We present an analysis of events and trends captured through this initiative (such as a debris flow that occurred in 2019), and specifically of hotspots and potential uses of this sort of refined, spatialized rainfall information in the dry and tropical Andes. We also present a qualitative analysis of volunteers' expectations and perceptions. Finally, we present a meteorological explanation of selected events, supporting the importance of measuring localized precipitation during the occurrence of extreme events in similarly complex physical and social contexts.

How to cite: Arestegui, M., Ordoñez, M., Cisneros, A., Madueño, G., Almeida, C., Aliaga, V., Quispe, N., Millán, C., Lavado, W., Huaman, S., and Phillips, J.: Participatory rainfall monitoring: strengthening hydrometeorological risk management and community resilience in Peru, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13115, https://doi.org/10.5194/egusphere-egu22-13115, 2022.

EGU22-2135 | Presentations | ITS3.2/HS1.1.8

Assessing the groundwater sustainability of Bengaluru megacity, India, through the lens of socio-hydrogeology 

Tejas Kulkarni, Matthias Gassmann, Chandrakanth Kulkarni, Vijayalaxmi Khed, and Andreas Buerkert

Water extraction in Bengaluru, India's fastest expanding metropolis, depends entirely on its ~500,000 wells in a crystalline rock aquifer, of which an unknown number have been abandoned and the water level of others has sunk to depths of 450 meters below the surface. Recent research has highlighted the spatial heterogeneity and questioned the reliability of water level data in these settings. To fill existing knowledge gaps on the likely over-extraction of groundwater as a vital resource, we used a socio-hydrogeological approach of front-lining local hydrogeologists to collect primary data on the spatio-temporal evolution of well depths across the city. Our data show that over the past 60 years borewell depth has increased significantly while water yields have remained unchanged, indicating that digging deeper wells is unsustainable. Using camera inspections of 56 wells in a 2.1 km2 catchment of industrial land use in Electronic City, Bengaluru, we noted that water levels in the wells are largely determined by rock fractures, not by well depth. Our data show that increased borewell depth is a good signal of declining water levels in Bengaluru’s aquifers. Analysis of the δ18O and δ2H signatures of groundwater samples across all depths followed the local meteoric water line, indicating recent recharge and implying that drilling deeper only increased the borehole volume and did not tap into newer water sources.
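For reference, local meteoric water lines of the kind used here are conventionally compared against the global meteoric water line (Craig, 1961); the abstract does not give the coefficients of the local line, so the global relation is shown below, in LaTeX form, purely as an illustration.

    % Global meteoric water line (values in per mil, VSMOW); groundwater samples
    % plotting on or near the local equivalent of this line indicate recently
    % recharged meteoric water.
    \delta^{2}\mathrm{H} = 8\,\delta^{18}\mathrm{O} + 10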

How to cite: Kulkarni, T., Gassmann, M., Kulkarni, C., Khed, V., and Buerkert, A.: Assessing the groundwater sustainability of Bengaluru megacity, India, through the lens of socio-hydrogeology, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2135, https://doi.org/10.5194/egusphere-egu22-2135, 2022.

EGU22-3412 | Presentations | ITS3.2/HS1.1.8

Socio-hydrogeological approach to identify contaminant fluxes towards groundwater-dependent hydrosystems, case of the Biguglia lagoon (Corsica, France) 

Eléa Crayol, Frédéric Huneau, Emilie Garel, Viviana Re, Alexandra Mattei, Sébastien Santoni, and Vanina Pasqualini

Coastal Mediterranean lagoons are very often groundwater-dependent hydrosystems; however, their hydrogeological functioning is poorly known, which hampers their management. Socio-hydrogeology makes it possible, in an inter- and transdisciplinary way, to clarify the relationships linking human activities and groundwater status. These interactions within the watershed, combined with the consumption patterns of the population and sanitation defects, can generate processes leading to pollutant fluxes with impacts on surface water, groundwater and lagoon water quality. This approach integrates both social and economic components into hydrogeological investigations.

The Biguglia lagoon watershed (Northern Corsica, France) has been chosen as a pilot site. Indeed, significant nitrate content, emerging compounds, and pesticides have already been observed in the lagoon waters, but their origin still needs to be specified, both in terms of source and dispersion modalities.

The aim of this study is to (1) assess the link between groundwater quality and the anthropogenic pressures on the watershed, (2) understand water users' and stakeholders' perception and knowledge of the watershed and the local territory, and (3) identify the origin of the pollution detected in the lagoon's water.

For this purpose, a field sampling campaign was carried out in spring 2021, combining several tools useful for improving the knowledge of the hydrogeological functioning and for tracing anthropogenic pollutant fluxes. Structured interviews were administered to 32 water users and 16 local stakeholders involved in the monitoring assessment, to determine the evolution of land use from the 1950s to the present and to identify past and present uses of the water resource over the watershed. At the same time, multi-tracer water sampling, combining physico-chemical parameters, major ions and trace elements as well as stable isotopes of the water molecule, was carried out at 53 points (lagoon, river and canal waters, groundwater), of which 21 samples were also analysed for a set of pesticides (screening of 240 molecules).

The pesticide analyses show that the study site is affected by agricultural pollution. Indeed, neonicotinoid insecticides, extensively used worldwide, were found at the sampling points in significant concentrations. These pesticides are mainly used in fruit, vegetable and cereal crops. The field survey, the questionnaire and the sampling campaign allowed us to identify and confirm the presence of these crops on the study site. In the same way, benzotriazoles, perfluorinated acids (PFAs) and DEET (an insect repellent) were also detected. They are related to the consumption habits of the population on the watershed.

Geochemical analysis, correlated with the social analysis and the land-use analysis, made it possible to better constrain the pollution sources, evidencing two main ones: sanitation defects and agricultural activity.

The socio-hydrogeological approach is essential to improve the knowledge of the Biguglia lagoon hydrosystem. The purpose of this work is to offer a new functional diagram of the area, including the space-time continuum of anthropogenic impacts within the watershed. This new knowledge will help local stakeholders towards the recovery of a good geochemical and ecological status for the lagoon brackish water body of Biguglia.

How to cite: Crayol, E., Huneau, F., Garel, E., Re, V., Mattei, A., Santoni, S., and Pasqualini, V.: Socio-hydrogeological approach to identify contaminant fluxes towards groundwater-dependent hydrosystems, case of the Biguglia lagoon (Corsica, France), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3412, https://doi.org/10.5194/egusphere-egu22-3412, 2022.

Developing green infrastructures (GIs) for rainwater harvesting has become prevalent in many arid regions, which requires a new water management framework. This paper focuses on the design and analysis of a water policy – a water trading scheme – for integrated green infrastructure and water resource management in a watershed that consists of multiple urban areas. A multiagent model, bringing together urban water management and GI planning models for multiple water managers with hydrological models, is proposed to show 1) what the optimized water trading scheme is, 2) how the scheme would affect the watershed's socio-hydrologic environment, and 3) what the role of GIs in the scheme is. In the model, the design of the water trading scheme depends not only on the hydrologic dynamics of the watershed caused by GIs but also on the social interactions between the watershed and multiple urban managers. The proposed model is applied to the Colorado River Lower Basin, which is one of the most arid regions of the USA and is planning water trading. Results indicated that a water-trading scheme effectively allocates limited water resources with a minimized system cost in the study area. Results also show that developing GIs to use rainwater resources might further reduce the cost induced by the water trading scheme. However, it might also exacerbate water resource allocation inequity among water users. These findings can help decision-makers design the associated water policy to support sustainable watershed development in arid regions.
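The cost-minimizing allocation idea behind such a trading scheme can be illustrated with a toy linear programme; the sketch below, in Python with SciPy, allocates water from a river and from GI-harvested rainwater to two urban areas at minimum total cost. All numbers, variable names and the linear-programming formulation itself are invented for illustration and are far simpler than the authors' multiagent model.

    # Toy cost-minimizing water allocation between two urban areas (A, B) and two
    # sources (river allocation and GI-harvested rainwater), solved as a linear
    # programme. All figures are invented.
    from scipy.optimize import linprog

    # Decision variables: x = [river->A, river->B, GI->A, GI->B] (million m3)
    cost = [1.0, 1.2, 0.6, 0.7]          # unit cost of delivering each allocation

    # Demands must be met exactly: A needs 8, B needs 5 (equality constraints)
    A_eq = [[1, 0, 1, 0],
            [0, 1, 0, 1]]
    b_eq = [8.0, 5.0]

    # Supplies cannot be exceeded: river <= 10, GI rainwater <= 4
    A_ub = [[1, 1, 0, 0],
            [0, 0, 1, 1]]
    b_ub = [10.0, 4.0]

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    print(res.x, res.fun)                # optimal allocation and total system cost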

How to cite: Zhang, M. and Chui, T. F. M.: Modeling water trading to support integrated green infrastructure and water resources management in an arid watershed, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3487, https://doi.org/10.5194/egusphere-egu22-3487, 2022.

EGU22-4065 | Presentations | ITS3.2/HS1.1.8

Combining groundwater numerical modelling and social sciences to assess water access in developing countries rural environments 

Daniela Cid Escobar, Albert Folch, Nuria Ferrer, and Xavier Sanchez-Vila

Shallow groundwater is usually more accessible than surface water in remote and rural areas, due to the infrastructure cost of collecting and allocating surface water to dispersed communities. However, the absence of a proper hydrogeological characterization of the aquifer system, combined with the lack of groundwater infrastructure and maintenance, technical capacity, and governance, has not allowed the development of sustainable use of local groundwater resources in different territories worldwide.

We propose an interdisciplinary approach to determine the risk of a household experiencing water shortage due to depletion of the aquifer, degradation of the water quality, lack of access to the water point, or lack of sustainable functionality. Three main parameters were defined: Closeness (determined by geographical parameters and easily computed using GIS), Availability (determined by hydrogeological parameters that can be assessed from a groundwater model), and Sustainability (differentiating between software functionality and hardware functionality (Bonsor, MacDonald, Casey, Carter, & Wilson, 2018), the former analyzed through Multiple Factor Analysis). Each of these three factors ranges between 0 and 1, and their product provides an index that can be used to map the risk of individual households.
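Written out, the household index implied above takes the following form (the symbols are ours for illustration; the abstract does not name them):

    % Household index as the product of the three factors, each ranging in [0, 1]
    % (which end of the scale corresponds to higher risk depends on the sign
    % convention chosen for the factors, which the abstract does not specify).
    R_{h} = C_{h} \times A_{h} \times S_{h}, \qquad C_{h}, A_{h}, S_{h} \in [0, 1]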

An application case in Kwale County, on the southeast coast of Kenya, is presented, where community handpumps are the main water supply system. The novelty of the index lies in the combination of groundwater model outputs with household data, which allows the generation of time-dependent risk indexes that can be calculated for several scenarios depending on the data available. In this case, we present three scenarios, one involving the potential malfunctioning of a percentage of the existing handpumps, and two others dealing with extreme climate scenarios, all of them designed to test the resilience of the proposed index and its applicability for decision making.

Acknowledgements: This work was funded by the Centre of Cooperation for Development of the Universitat Politècnica de Catalunya. We want to thank UPGRO and Gro For Good projects for their support and collaboration in acquiring available data.

References: Bonsor, H., MacDonald, A., Casey, V., Carter, R., & Wilson, P. (2018). The need for a standard approach to assessing the functionality of rural community water supplies. Hydrogeology Journal, 26(2), 367–370. https://doi.org/10.1007/s10040-017-1711-0

How to cite: Cid Escobar, D., Folch, A., Ferrer, N., and Sanchez-Vila, X.: Combining groundwater numerical modelling and social sciences to assess water access in developing countries rural environments, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4065, https://doi.org/10.5194/egusphere-egu22-4065, 2022.

EGU22-4383 | Presentations | ITS3.2/HS1.1.8

Insights from a transdisciplinary approach for water quality monitoring and multi-stakeholder management in the island of Santa Cruz, Galápagos (Ecuador) 

Chiara Tringali, Jonathan Rizzi, Viviana Re, Caterina Tuci, Marta Mancin, Edison Mendieta, and Antonio Marcomini

The Galápagos Archipelago (Ecuador) is traditionally considered a living museum and showcase of evolution. Its rich biodiversity and distinctive environment attract thousands of visitors every year. However, this tourist flow exerts continuous pressure on the natural environment, and on water resources in particular, to the detriment of the local population, who are faced with the challenge of accessing safe and sustainable drinking water.

For this reason, over the years numerous projects, especially in the context of international cooperation activities, have tried to assess the impact of anthropogenic activities on water quality and quantity in the islands. Unfortunately, the lack of coordination among all these projects did not allow continuous monitoring to be carried out and, above all, homogeneous and consistent time series of the measured hydrogeochemical parameters to be obtained.

Against this background, in the framework of a joint technical cooperation project ("Health protection and prevention of anthropic pollution risks in the Island of Santa Cruz", financed by the Veneto Region, Italy; CS2012A19), a comprehensive assessment of the water quality data (physico-chemical parameters, major elements, trace elements and coliforms) collected since 1985 on Santa Cruz Island was performed. The results revealed the need to optimize monitoring efforts to fill knowledge gaps and to better target decision-making processes. All data were therefore standardized, homogenized and collected in an open database, accessible to all water stakeholders involved in water control, management and protection on the island.

The information-gathering activity also revealed a lack of coordination between the stakeholders themselves and the presence of overlapping interests in water resources, which represent an obstacle to coordinated actions targeted at sustainable water resources management in such a fragile environment.

Therefore, under the guidance of the Santa Cruz Municipality, a Water Committee was established to foster the coordinated action among the water stakeholders in the island. The latter range from national to local authorities (e.g. National Water Secretariat, Ministry of Agriculture, Ecuador Naval Oceanographic Institute, National Park Galapagos, Municipality), research institutes (Charles Darwin Foundation), bottled water companies and Santa Cruz Households. Within the committee, shared procedures for data collection, sample analysis, evaluation and data assessment by an open access geodatabase were agreed collectively and tested in the field. Joint monitoring in the island can optimize the efforts for water quality assessment and protection, and improve accountability and outreach towards civil society and water users. Such a coordinated action can also ensure that international cooperation activities carried out in the island will respond to the real needs of the local population, and results will contribute to the long-term protection of the scarce water resources in the island.

Overall, the results of the project revealed the high potential of adopting transdisciplinary approaches in the complex, multi-stakeholder frameworks typical of small island states.

How to cite: Tringali, C., Rizzi, J., Re, V., Tuci, C., Mancin, M., Mendieta, E., and Marcomini, A.: Insights from a transdisciplinary approach for water quality monitoring and multi-stakeholder management in the island of Santa Cruz, Galápagos (Ecuador), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4383, https://doi.org/10.5194/egusphere-egu22-4383, 2022.

EGU22-7114 | Presentations | ITS3.2/HS1.1.8

Integration of hydrogeology and social sciences in practice, two IWRM case studies with challenges and opportunities from semi-arid Africa 

Anne Van der Heijden, Maarten J. Waterloo, Anouk I. Gevaert, and Daniela Benedicto van Dalen

Groundwater resources in African drylands are important sources of freshwater but are under pressure due to population growth and climate change. It is therefore increasingly important that groundwater resources are managed in a sustainable way. The development of IWRM plans is ongoing in (semi-)arid African countries with support from national governments, NGOs and consultancies. This presentation aims to highlight two case studies in which bio-geophysical and socio-economic data were combined to assist the Integrated Water Resources Management (IWRM) process: 1) a catchment-scale Water Infrastructure Assessment (WIA) in Sudan and 2) an assessment of pathways towards sustainable groundwater use in African drylands. For each case study, lessons learned and recommended approaches are provided.

In IWRM intervention planning for semi-arid regions a local increase in available water resources is sought after, which can be found in the better use of excess runoff. A balance between water demand and water resources on community level is key and a prerequisite for implementing durable and inclusive interventions that last. The IWRM process starts with a strong knowledge base. In practice, however, the development of a good knowledge base is not simple. Challenges arise in collecting, processing, and mapping results. With hydrogeology, a 3D situation is translated to 2D maps. Socio-economic data are often stored based on administrative boundaries and need corrections for hydrological source-area delineation and seasonal and interannual variations. Population density and water demand change over seasons, following crop cycles and livestock migration patterns. Looking at local water availability, rainfall and surface water flows are becoming more variable and less reliable. Therefore, assessment of the rainfall regime and corresponding behaviour and movements of people and livestock is key. For WIAs, yields and usage are often averaged, thus disregarding seasonal changes, even though shallow wells and reservoirs regularly become depleted outside the rainy season. The Sudan case study presents an improved approach for a WIA, that is adaptable and can be applied in semi-arid environments in Africa and elsewhere, in which seasonality and socio-economic dynamics were taken into account.

Both hydrogeologic and socio-economic conditions tend to be quite location-specific. This makes developing a simple blueprint for integrated groundwater management impossible. However, by translating local conditions into regional advice, strategic pathways were developed for the drylands of Africa[1] to support IWRM. The zonal hydrogeological and socio-economic setting determined the main groundwater issues and the potential sustainability strategies. The sustainability pathways describe potential sets of strategies that can be effective in moving towards sustainable groundwater resources development and use. While these pathways provide insight into regional differences within the African drylands, these cannot be used at local scales. Tailor-made approaches are necessary. In these assessments, remote sensing provides opportunities. Gridded datasets of population density are of great value in water demand assessments on a larger scale. Participatory stakeholder processes also provide opportunities, including group interviews for development of community calendars providing useful information on the occurrence and frequency of natural hazards and water demand.

[1] Gevaert et al. 2020, Towards sustainable groundwater use in the African drylands

 

 

How to cite: Van der Heijden, A., Waterloo, M. J., Gevaert, A. I., and Benedicto van Dalen, D.: Integration of hydrogeology and social sciences in practice, two IWRM case studies with challenges and opportunities from semi-arid Africa, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7114, https://doi.org/10.5194/egusphere-egu22-7114, 2022.

EGU22-10542 | Presentations | ITS3.2/HS1.1.8

From Coarse Resolution to Realistic Resolution: GRACE as a Science Communication and Policymaking Tool for Sustainable Groundwater Management 

Li Xu, James S. Famiglietti, David Ferris, Xander Huggins, Chinchu Mohan, Sara Sadri, Palash Sanyal, and Jefferson S. Wong

Managing groundwater resources is challenging because they are difficult to monitor. The application of remote sensing methods has improved our capacity to monitor variability in groundwater storage, as is the case for the Gravity Recovery and Climate Experiment (GRACE) and the GRACE Follow-On (GRACE-FO) missions. While GRACE-based groundwater studies to date have covered many places across the globe, perspectives that link scientific studies to policymaking and practices are still limited. Challenges to applying GRACE data into practice result from their coarse resolution, which limits their utility at the smaller scales at which water management decisions are made. Another reason is that the data and related studies can be difficult to use and understand by policymakers and end-users. However, these challenges offer the GRACE scientific community opportunities to communicate with stakeholders, policymakers, and the public in raising awareness around groundwater sustainability issues. This paper addresses three questions: which GRACE data and GRACE-derived products can be useful for groundwater practices and management; how GRACE-derived groundwater messages can be better communicated with practitioners; and how to better operationalize GRACE-derived products for groundwater practice and management. This paper also aims to provide an agenda for the continued use of GRACE and GRACE-FO for the purpose of sustainable groundwater management. To gain insight into these questions, a policy Delphi survey was conducted to collect opinions of both the scientific and non-scientific communities. We made use of target search and snowballing techniques to identify suitable participants who are experienced groundwater researchers or practitioners, and who are familiar with GRACE. A total of 25 participants from around the world were surveyed (14 scientific and 11 non-scientific), and they provided thoughtful responses. We found that both communities acknowledged the potential of GRACE data and GRACE-derived products for groundwater management, and would be willing to collaborate to develop projects for practical applications. Better communication between researchers and practitioners was recommended as a key for the application of GRACE-derived products into practice. Practitioners noted their high demand for reliable data for their management responsibilities, but are more favorable towards locally observed data. The reliability of GRACE at small scales was an issue, even though some robust downscaling methods have been demonstrated down to local scales. The survey showed a desire for more comparison of GRACE-derived products to local measurements to determine whether GRACE products, e.g. downscaled data, can be useful for informing local decisions. Based on the survey, we proposed an agenda that helps to improve the usefulness of GRACE-derived products for practices. This agenda includes scientific recommendations that help to resolve the resolution and technical barriers for local applications, and professional perspectives that bridge the connection between science and policy, and facilitate communication for groundwater management.

How to cite: Xu, L., Famiglietti, J. S., Ferris, D., Huggins, X., Mohan, C., Sadri, S., Sanyal, P., and Wong, J. S.: From Coarse Resolution to Realistic Resolution: GRACE as a Science Communication and Policymaking Tool for Sustainable Groundwater Management, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10542, https://doi.org/10.5194/egusphere-egu22-10542, 2022.

EGU22-11819 | Presentations | ITS3.2/HS1.1.8

Adaptation to floods and droughts in (semi) arid transboundary basins: insights, barriers and opportunities drawn from socio-hydrogeological research in the Limpopo river basin, Southern Africa 

Jean-Christophe Comte, Luis Artur, Zareen Bharucha, Farisse Chirindja, Rosie Day, Joyce Dube, Fulvio Franchi, Josie Geris, Stephen Hussey, Eugene Makaya, Alessia Matano, Syed Mustafa, Edward Nesamvuni, Oluwaseun Olabode, Melanie Rohse, Simon Taylor, Sithabile Tirivarombo, and Anne Van Loon

The Limpopo river basin (LRB) is water-stressed and highly susceptible to floods and droughts. The impacts of floods and droughts on water availability and quality are increasing as a result of their growing magnitude and frequency. The LRB encompasses a large diversity of physical and socio-economic characteristics spread across four Southern African countries (Botswana, Mozambique, South Africa and Zimbabwe). This dictates highly heterogeneous physical and human responses, coping mechanisms, and policy frameworks from local to transboundary scales.

Understanding the multidimensional connections that exist between and within flood and drought events and cycles, between various regions across the basin, between physical and social impacts, and between users and decision-makers, is critical to sustainable water resources management and long-term resilience to hydrological extremes.

The Connect4 Water Resilience project has brought together an international multidisciplinary team of hydrologists and social scientists from academia, policy, and practice to investigate the drivers and impacts of floods and droughts, and to promote solutions towards adaptation. In our research we deployed hydrological and geological investigations alongside community and governance interviews and workshops across the LRB to jointly feed in the application of a large-scale transboundary hydrological model of the LRB. Model assessment and future management scenario definition and analysis were implemented collaboratively with stakeholders across the basin, through iterative workshops at local, national, and transboundary scales.

Results so far revealed: (1) the high complementarity of physical (hydrological and sedimentological) and social (community narrative) data to reconstruct spatiotemporal dynamics and impacts of events, which has been crucial to model application in the basin affected by highly fragmented monitoring; (2) the observed increase in flood and drought magnitude and frequency is not responsible for significant changes in groundwater recharge, suggesting that the generally observed groundwater level decline is related to increasing abstraction, which in turn amplifies droughts; (3) flood severity and impacts are higher after droughts regardless of rainfall magnitude; (4) mitigation, through anticipatory action and preparation for floods and droughts at policy, user and community level, is uneven and inadequately resourced, with generally some forms of preparation for droughts but little for floods; (5) the uptake of forecast and management recommendations from governments is patchy, while extension officers are playing a key role for communication and NGOs for training; (6) local stakeholder expertise and experience brought in during stakeholder workshops were critical to groundwater model conceptualisation, and management scenario definition and analysis; (7) preferred scenarios of management strategies, as collaboratively defined with stakeholders, were highly variable across the LRB countries and sub-regions, including preference for local water management (e.g. temporary flood water storage for subsequent droughts) in upstream upland regions vs large-scale strategies (e.g. storage in dams) in downstream floodplain regions; however, hydrological model outputs showed that local/regional strategies have basin-scale (transboundary) impacts, emphasizing the importance of transboundary cooperation and management of water resources and extreme events.

Research outcomes are being translated into tailored guidance for policy and practice including feeding in ongoing early warning system development and sustainable water resource management.

How to cite: Comte, J.-C., Artur, L., Bharucha, Z., Chirindja, F., Day, R., Dube, J., Franchi, F., Geris, J., Hussey, S., Makaya, E., Matano, A., Mustafa, S., Nesamvuni, E., Olabode, O., Rohse, M., Taylor, S., Tirivarombo, S., and Van Loon, A.: Adaptation to floods and droughts in (semi) arid transboundary basins: insights, barriers and opportunities drawn from socio-hydrogeological research in the Limpopo river basin, Southern Africa, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11819, https://doi.org/10.5194/egusphere-egu22-11819, 2022.

EGU22-12846 | Presentations | ITS3.2/HS1.1.8

Reflections on collaboration and capacity-building for sustainable groundwater quality monitoring in rural Malawi 

Fortune Gomo, Sarah Halliday, Wiktor Chichlowski, Susan Chichloska, Harold Zaunda, and Alistair Geddes

Drinking water quality is a key component of water security, ensuring clean and safe water supplies in pursuit of the global SDG 6. Yet there are frequently capacity constraints on the adequacy and sustainability of water quality monitoring programs in LDC contexts, especially in rural areas where resources are more limited and the resident population is more reliant on scattered independent groundwater supplies. In Malawi, knowledge of the importance of water quality has been developing over recent years, necessitating local capacity development for sufficient and sustained water quality monitoring.

International, transdisciplinary, and interdisciplinary research collaboration and capacity-building efforts in rural water quality monitoring can be a vehicle to improve technology development that supports operational monitoring and data reporting in resource-poor settings. However, in cognate fields, similar international partnership models have drawn some criticism of late, because of their alleged tendency not to translate collaboration agreements into demonstrable local capacity gains. We therefore link our consideration of these issues to our direct input into efforts to create a new water quality testing program in rural southern Malawi, in a collaborative research project between the University of Dundee and Fisherman’s Rest, a local NGO in Malawi. Fisherman’s Rest works with rural communities in Malawi, specifically on borehole monitoring under the Madzi Alipo program. However, their work lacked the water quality monitoring component, a key element of water security. Using our reflections, we find that the line of critique on international collaborations has some value in terms of thinking about how to advance ‘genuine’ collaboration and capacity-building in water quality monitoring programs as we look to expand our collaboration efforts with Fisherman’s Rest and other stakeholders in rural water quality monitoring in Malawi.

How to cite: Gomo, F., Halliday, S., Chichlowski, W., Chichloska, S., Zaunda, H., and Geddes, A.: Reflections on collaboration and capacity-building for sustainable groundwater quality monitoring in rural Malawi, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12846, https://doi.org/10.5194/egusphere-egu22-12846, 2022.

EGU22-1187 | Presentations | ITS3.3/CL3.2.20

Resilience service technologies for identifying climate change adaptation gaps 

Finn Laurien, Ian Mccallum, Stefan Velev, and Reinhard Mechler

Communities around the world in natural hazard-prone regions are increasingly aware of the benefits of using spatio-temporal data to better understand their predicament. With the advent of new service technologies, such as web mapping, free and open satellite data and the proliferation of mobile technologies, the possibilities for both understanding and improving community resilience are on the rise. Resilience service technologies aim to provide risk-informed products in an easy-to-use manner, enabling stakeholders to implement efficient and practical resilience activities in their communities.

This paper presents a service-oriented approach that aims to harness risk and resilience data in hazard-prone regions to raise awareness regarding early warning systems and the safety conditions of minorities in community groups, and to plan long-term resilience strategies. With our resilience dashboard platform, we utilize information from various risk and resilience services to identify and visualize susceptible hotspots for decision-makers. The resilience dashboard also coordinates different web services to retrieve the relevant features and apply the thresholds. We co-developed the resilience dashboard with local humanitarian and development teams; it is designed to put geo-spatial flood resilience data into the hands of decision-makers. We identified three use cases that consider the added value of resilience service technologies by focusing on early warning systems, targeting minority groups, and long-term resilience planning in Nicaragua, Nepal and Bangladesh. We will demonstrate the context-specific needs of resilience service technologies, how to target user needs, and how they could potentially be scaled up and applied to similar regions around the world.

How to cite: Laurien, F., Mccallum, I., Velev, S., and Mechler, R.: Resilience service technologies for identifying climate change adaptation gaps, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1187, https://doi.org/10.5194/egusphere-egu22-1187, 2022.

EGU22-2518 | Presentations | ITS3.3/CL3.2.20

A novel, Copula-based approach for the bias correction of daily precipitation: a case study in the eastern Mediterranean 

G. Lazoglou, G. Zittis, and A. Bruggeman

Climate model output is widely used as input to impact models. Such applications include hydrological, crop, energy modeling, and more.  However, due to model deficiencies and the stochastic nature of climate processes, some variables (e.g., daily precipitation) tend to present systematic biases and deviations from the observed conditions. This is particularly important when studying high-impact extreme events. The present study aims to develop a Copula-based method for bias-correcting modeled daily precipitation. Precipitation data are provided by two EURO-CORDEX regional climate models (KNMI-RACMO22E and CLMcom-CCLM4) and for two time periods (1981-2010 and 2031-2060). The demonstration area is the island of Cyprus, located in the eastern Mediterranean climate change hot-spot. Cyprus is characterized by a complex coastline and steep orography that drive the precipitation distribution. As a reference dataset, we used a high resolution (1x1km) gridded observational dataset, derived from a dense network of stations. For this application, we developed a copula-based structure scheme between the reference and the simulated data sets. This was for the historical period and each model grid cell. Then, assuming this relation remains unchanged, we corrected the biases for both study periods (historical and near future). Due to the stochastic nature of precipitation, the copula schemes were developed separately for each hydrological season (i.e., wet: November to March and dry: April to October). In addition, different copula schemes were developed for non-extreme and extreme events. The results showed that the proposed method could significantly improve the modeled precipitation for both models in 85% and 92% of grid cells, respectively. These improvements are evident throughout the year and for both extreme and non-extreme values. The climate change signal (precipitation decline near 7%) remains unchanged after applying the bias correction.
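A minimal illustration of the kind of copula-based correction described above is sketched below in Python, assuming a Gaussian copula with gamma marginals fitted to paired reference and simulated wet-day precipitation at one grid cell for one season; the actual copula families, the separate treatment of extremes and the gridded workflow of the study are not reproduced here, and the data are synthetic.

    # Hedged sketch: Gaussian-copula correction of simulated wet-day precipitation,
    # illustrative of (not identical to) the scheme described in the abstract.
    import numpy as np
    from scipy.stats import gamma, norm

    def fit_gaussian_copula(obs, sim):
        """Fit gamma marginals and a Gaussian copula to paired obs/sim values."""
        p_obs = gamma.fit(obs, floc=0)          # (shape, loc, scale) of the reference
        p_sim = gamma.fit(sim, floc=0)          # (shape, loc, scale) of the model
        z_obs = norm.ppf(np.clip(gamma.cdf(obs, *p_obs), 1e-6, 1 - 1e-6))
        z_sim = norm.ppf(np.clip(gamma.cdf(sim, *p_sim), 1e-6, 1 - 1e-6))
        rho = np.corrcoef(z_sim, z_obs)[0, 1]   # copula dependence parameter
        return p_obs, p_sim, rho

    def correct(sim_new, p_obs, p_sim, rho):
        """Map simulated values to the reference scale via the conditional median."""
        u = np.clip(gamma.cdf(sim_new, *p_sim), 1e-6, 1 - 1e-6)
        z_cond = rho * norm.ppf(u)              # conditional median in normal scores
        return gamma.ppf(norm.cdf(z_cond), *p_obs)

    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 4.0, 1000)              # synthetic reference series (mm/day)
    sim = 0.7 * obs + rng.gamma(1.5, 3.0, 1000)  # biased, correlated model series
    params = fit_gaussian_copula(obs, sim)
    print(correct(np.array([5.0, 20.0, 60.0]), *params))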

How to cite: Lazoglou, G., Zittis, G., and Bruggeman, A.: A novel, Copula-based approach for the bias correction of daily precipitation: a case study in the eastern Mediterranean, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2518, https://doi.org/10.5194/egusphere-egu22-2518, 2022.

EGU22-3125 | Presentations | ITS3.3/CL3.2.20

From science to action - climate risk analyses to support adaptation policies and planning at a local level in northern Ghana 

Paula Aschenbrenner, Abel Chemura, Lemlem Habtemariam, Francis Jarawura, and Christoph Gornott

Agricultural production is highly weather-dependent in sub-Saharan Africa. Under climate change the risk of yield losses increases even further, posing a threat to farmers’ income and livelihood. Despite the availability of a wide range of adaptation strategies for the agricultural sector, information on their suitability at the local scale is limited.

In this session, we would like to discuss an example of a climate risk analysis that supports decision makers on a local scale in northern Ghana in adaptation planning. Using the latest past and projected climate data as well as biophysical crop models, the study first quantified climate impacts on agriculture. Secondly, the suitability of different adaptation strategies was assessed under socio-economic and biophysical aspects using mixed methods including interviews, literature, a cost-benefit analysis and agricultural modelling. Differential vulnerabilities of farmers based on their identities were taken into account. Relevant stakeholders from Ghanaian local and national governmental institutions, civil society, academia, the private sector, practitioners and development partners were engaged throughout the study process in three workshops, selected the adaptation strategies and were consulted in various interviews.

Results show the dominant negative impacts of climate change on main staple crop yields in northern Ghana, with differences according to region, crop and management possibilities of the farmer. The four analysed adaptation strategies (using improved seeds, cashew plantations alley cropped with legumes, Farmer Managed Natural Regeneration, and irrigation) can all increase agricultural production and income while having differential positive co-benefits and negative side-effects. Unequal access to power, assets and land leads to differing opportunities in the uptake of suitable measures. Detailed recommendations for an implementation of the adaptation strategies ensuring an increased adaptive capacity of whole communities were developed and discussed with stakeholders. The information was prepared in policy briefs and short films.

How to cite: Aschenbrenner, P., Chemura, A., Habtemariam, L., Jarawura, F., and Gornott, C.: From science to action - climate risk analyses to support adaptation policies and planning at a local level in northern Ghana, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3125, https://doi.org/10.5194/egusphere-egu22-3125, 2022.

EGU22-3568 | Presentations | ITS3.3/CL3.2.20

2020 Vision: Using transdisciplinary approaches in understanding climate (in)action through youth led participation in mitigating hydrological extremes. 

Katie J. Parsons, Lisa Jones, Florence Halstead, Hue Le, Thu Thi Vo, Christopher R. Hackney, and Daniel R. Parsons

We are in the midst of a climate emergency requiring urgent climate action that is, as yet, unforthcoming both on the scale and at the speed commensurate with the associated hazard and risk. This paper presents work that considers this current state of inaction and explores how we might understand the underpinning processes of attitudinal and behavioural change needed through the emotional framework of loss.

This inaction is also explored through the additional lens of the year 2020, a year of tumultuous social change created by the COVID–19 pandemic. The article draws parallels with, and looks to learn from, the ways in which the collective loss experienced as a result of COVID–19 may offer a sense of hope in the fight to adequately address climate change, while recognising that meeting the Sustainable Development Goals will also require climate injustices to be addressed. We argue that appropriate leadership that guides widespread climate action from all is best sought from those groups already facing losses from climate change and therefore already engaged in climate-related social action and activism, including youth and Indigenous peoples.

In this regard we present work from an ongoing project based within the Red River catchment (Vietnam), which is already experiencing enhanced hydrological extremes. Resultant floods, landslides and soil erosion in the upper region are having impacts on communities, whilst relative sea-level rises in the region are affecting the frequency and magnitude of flooding. Our research is working with young people and their communities, alongside social and environmental scientists in partnership, to identify imaginative ways to mitigate these climate change challenges and foster action. The paper will outline how this youth-led approach explores how local, traditional, and indigenous knowledges can develop understandings and strengthen local and societal resilience, incorporating peer-to-peer, intergenerational and cross-/inter-cultural forms of collaborative, and socially just, learning.

How to cite: Parsons, K. J., Jones, L., Halstead, F., Le, H., Thi Vo, T., Hackney, C. R., and Parsons, D. R.: 2020 Vision: Using transdisciplinary approaches in understanding climate (in)action through youth led participation in mitigating hydrological extremes., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3568, https://doi.org/10.5194/egusphere-egu22-3568, 2022.

EGU22-4090 | Presentations | ITS3.3/CL3.2.20

Evaluation of co-creation processes in climate services  -  Development of a formative evaluation scheme 

Elke Keup-Thiel, Sebastian Bathiany, Markus Dressel, Juliane El Zohbi, Diana Rechid, Susanne Schuck-Zöller, Mirko Suhari, and Esther Timm

Climate change increasingly affects all parts of society. Different economic sectors such as the agricultural sector have to adapt to climate change. More and more climate services are being developed in order to support this adaptation to climate change with accurate and suitable products. Good practices for the design of climate services include transdisciplinary approaches and co-creation of climate service products. The development of usable and useful climate service products and effective adaptation measures requires constant interactions between climate service providers and users of the products. To assess the effectiveness of these co-creation endeavours, continuous evaluation is crucial. At present, output and outcome assessments are conducted occasionally in this research field. However, these summative evaluations that are performed ex-post do not help to adjust the ongoing process of co-creation. Therefore, the focus of the presented work is on formative evaluation of the co-creative development of science-based climate service products. A formative evaluation is done during the run-time of a project with the intention to reflect on and readjust it. For this purpose, we analysed in detail the process of co-creation of climate service products in the knowledge transfer project ADAPTER (ADAPT tERrestrial systems, https://adapter-projekt.org/) and combined this analysis with a systematic literature review. In ADAPTER, simulation-based climate service products are developed together with key partners and practitioners from the agricultural sector, with the aim of supporting decision making in the context of climate change adaptation.

As a first step, main characteristics of the product development process were identified empirically and six sub-processes of product development were determined. Secondly, questions for a formative evaluation were assigned to the different steps and sub-processes. Thirdly, a literature review including fields other than climate services delivered additional qualitative aspects. As a result, a scheme of quality criteria and related assessment questions for the different sub-processes in climate service development was created, based on both empirical and theoretical work. Subsequently, this scheme needs validation and testing. The resulting formative evaluation scheme will be particularly helpful to reflect on and to improve the co-creation processes in climate services and beyond.

How to cite: Keup-Thiel, E., Bathiany, S., Dressel, M., El Zohbi, J., Rechid, D., Schuck-Zöller, S., Suhari, M., and Timm, E.: Evaluation of co-creation processes in climate services  -  Development of a formative evaluation scheme, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4090, https://doi.org/10.5194/egusphere-egu22-4090, 2022.

EGU22-4596 | Presentations | ITS3.3/CL3.2.20

Cost-effective measures for climate change adaptation in a drought-prone area in eastern Germany 

Beate Zimmermann, Christian Hildmann, Sarah Kruber, Johanna Charlotte Witt, Astrid Sturm, Lutz Philip Hecker, and Frank Wätzold

Adaptation to climate change is an inevitable challenge in many regions. In our study area, which is located in the state of Brandenburg in eastern Germany, land use is increasingly affected by long-lasting soil moisture deficits in the vegetation period. It is therefore important to implement measures for water retention at the landscape scale that postpone and mitigate the severity of these drought periods. Our objective is to identify cost-effective measures in a manner that maximizes expected ecological benefits for available budgets. For this purpose, we combine a scientific analysis of the determinants of land surface temperature with site-specific cost calculations.

The distribution of land surface temperature serves as a proxy for environmental conditions that favor water retention and, as a consequence, provide a certain cooling effect during hot and dry periods. Landsat thermal images from the vegetation seasons of 2013 to 2020 were rescaled (min-max normalization) and used as the response variable for a Bayesian multilevel model. Several parameters of the physical environment such as land cover, forest and crop type, soil water holding capacity, canopy cover and degree of soil sealing were used as explanatory variables. In addition, an antecedent moisture index and potential evapotranspiration at time of satellite overpass were incorporated into the model. First results highlight the importance of land use and canopy cover for land surface temperature distribution. In general, the analysis enables the identification of overheated landscapes. Moreover, model predictions after hypothetical implementation of adaptation measures provide an ecological benefit assessment based on the cooling capacities. We also determine the costs of the different measures in a spatially differentiated manner. An integrated modeling procedure combines the results from the ecological and economic assessments.
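A rough, purely illustrative sketch of this workflow is given below: per-scene min-max normalization of land surface temperature followed by a multilevel regression on a synthetic pixel table. A frequentist mixed model (statsmodels) stands in here for the Bayesian multilevel model actually used, and the column names and predictor set are assumptions, not the study's variables.

    # Hedged sketch: min-max rescaling of LST per scene and a mixed model with a
    # random intercept per scene; a stand-in for the Bayesian multilevel model.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 600
    df = pd.DataFrame({
        "lst_raw": rng.normal(300.0, 6.0, n),                    # synthetic LST (K)
        "canopy_cover": rng.uniform(0, 100, n),
        "moisture_index": rng.normal(0, 1, n),
        "land_use": rng.choice(["cropland", "forest", "urban"], n),
        "scene_id": rng.choice(["s2013", "s2016", "s2020"], n),  # Landsat scene
    })

    # Min-max normalization applied separately to each thermal scene
    df["lst_norm"] = df.groupby("scene_id")["lst_raw"].transform(
        lambda x: (x - x.min()) / (x.max() - x.min()))

    # Fixed effects for land use, canopy cover and moisture; random intercept per scene
    model = smf.mixedlm("lst_norm ~ C(land_use) + canopy_cover + moisture_index",
                        df, groups=df["scene_id"])
    print(model.fit().summary())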

In this contribution, we will present the results of the Bayesian modeling and discuss a first example of the cost-effectiveness analysis in an agricultural landscape.

How to cite: Zimmermann, B., Hildmann, C., Kruber, S., Witt, J. C., Sturm, A., Hecker, L. P., and Wätzold, F.: Cost-effective measures for climate change adaptation in a drought-prone area in eastern Germany, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4596, https://doi.org/10.5194/egusphere-egu22-4596, 2022.

EGU22-7773 | Presentations | ITS3.3/CL3.2.20

Challenges and approaches in transdisciplinary climate change adaptation projects 

Jan-Albrecht Harrs and Kevin Laranjeira

Climate risks and the appropriate climate change adaptation (CCA) strategies and solutions are highly localized, as they are dependent on the local climate signal, normative assessments on associated risks and the capacities and motivation of municipalities to plan and implement adaptive measures. Research projects trying to explore and pilot applied local solutions therefore need to co-develop recommendations with local practitioners and stakeholders.

Even though a diverse landscape of climate information (CI) is already available and many municipalities know which risks they may face, knowledge and skills on how to interpret, apply and integrate this information in adaptation action are regarded as necessary. Different, albeit non-representative surveys among municipalities in Germany show that more cities are engaging in developing concepts and strategies (Hasse & Willen, 2018; Hagelstange et al., 2021; Handschuh et al., 2020), but that more practice-oriented information on how to identify regional and local vulnerabilities, evaluate efficient adaptive measures, and identify and build up adaptive capacities is needed (Handschuh et al., 2020; Kahlenborn et al., 2021; BBSR, 2016).

Based on an extensive literature analysis of journal articles, research project reports and strategic policy documents, as well as the experience of accompanying six transdisciplinary research projects, the following categorization of challenges will be presented:

  • Governance
  • Adaptive capacities
  • Integrative assessment of adaptive measures
  • Climate model data and information
  • Transdisciplinary work in applied research projects

Drawing on insights on the challenges, a list of recommendations for increasing the use-value of climate information and knowledge for CCA in municipalities is outlined. Tackling these five challenges through co-creating and inserting CI and services into municipal procedures and systems can then address the “last mile problem” (Celliers et al., 2021) of CI and support the lagging implementation of CCA.

In order to conduct impactful transdisciplinary research projects, the specific governance context of municipalities needs to be explored. A survey shows that spatial planning departments, not environmental departments, implement most CCA measures (EEA, 2020), whereas planning often lacks climate awareness (Skelton, 2020), signifying the need for cross-departmental approaches. Likewise, the understanding and possible uses of CI need to be conveyed through appropriate transdisciplinary methods.

How to cite: Harrs, J.-A. and Laranjeira, K.: Challenges and approaches in transdisciplinary climate change adaptation projects, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7773, https://doi.org/10.5194/egusphere-egu22-7773, 2022.

EGU22-7810 | Presentations | ITS3.3/CL3.2.20

Climate X: Meeting the demand for multi-hazard climate risk information tailored to financial services 

Markela Zeneli, Claire Burke, Laura Ramsamy, Hamish Mitchell, James Brennan, and Kamil Kluza

As uncertainty around the impacts of climate change becomes more apparent, businesses and communities are relying on cutting-edge information to help them navigate their next steps. Climate X is a climate risk information provider that aims to help businesses and communities prepare for a rapidly changing environment, with an explainable and transparent method.

Our flagship product, Spectra, presents users with a multitude of potential hazards including flooding (fluvial, pluvial, and coastal), subsidence, landslides, and extreme heat. Each hazard risk is quantified at street level, and we project risks and impacts for low emissions (RCP2.6) and high emissions (RCP8.5) scenarios. This allows users to see the difference between the best-case and worst-case scenarios for assets across the UK.

This poster will cover our methods of finding data, interpolating, modelling, and predicting, as well as a tour of our easy-to-use UI.

How to cite: Zeneli, M., Burke, C., Ramsamy, L., Mitchell, H., Brennan, J., and Kluza, K.: Climate X: Meeting the demand for multi-hazard climate risk information tailored to financial services, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7810, https://doi.org/10.5194/egusphere-egu22-7810, 2022.

EGU22-8085 | Presentations | ITS3.3/CL3.2.20

KlimaKonform – An interdisciplinary project to support smaller communities in climate change adaptation 

Christian Bernhofer, Majana Heidenreich, Verena Maleska, Reinhard Schinke, and Niels Wollschläger

How do we succeed in supporting climate adaptation also outside of the large urban areas? Which measures do we need to mitigate the consequences of climate change? What support and which tools do smaller communities need in planning and implementing necessary measures? What is specific to low mountain ranges? To answer these questions, a targeted process was initiated by researchers and practitioners from three German federal states (Saxony, Saxony-Anhalt and Thuringia), working together in the interdisciplinary project KlimaKonform within BMBF RegIKlim.

The model region covers three counties in the catchment area of the Weiße Elster. The low mountain region is typical of large parts of Germany and other Central European countries. Thus, the approach, experiences, methods and products are easily transferable to other low mountain ranges. Small and medium-sized municipalities often have to deal with limited budgets, as well as limited technical and administrative capacities. Community income is mainly generated by agriculture and forestry, small businesses and partly tourism. At the same time, the challenges posed by the increasing intensity and frequency of extreme events such as flash floods, water shortage, heat waves and storms are similar to those of large cities with much higher capacities in personnel and finances.

Unfortunately, adaptation to extreme weather and climate change often comes only after a damaging event, for example after extreme precipitation destroyed the municipal water infrastructure (paths, sewer network, and waste water treatment plants). KlimaKonform supports communities to become active before damage occurs and thus foster the move from event-related to preventive and strategic action. Therefore, KlimaKonform offers new concepts and customised tools to assess the impacts of climate change, determine their capacities for adaptation and derive appropriate measures. The tools will consider the needs in the model region and address the uncertainties related to future climate change and climate model output.

Examples are given for various foci of the project. One focus of KlimaKonform involves the interdisciplinary assessment of extreme events by coupled model chains ranging from climate change ensembles to third-order impact models. Hazards such as heavy rainfall and floods, together with their impacts, are incorporated. The location in the low mountain range requires high-resolution climate input data for modelling due to the correspondingly high flow velocities. These data are not sufficiently available for regional climate impact modelling. In cooperation with the project NUKLEUS and hydro-impact modellers in RegIKlim, approaches like bias adjustment of climate model outputs are tested for applicability. The aim is to reduce uncertainties in model application while increasing the effectiveness of precautionary and adaptation measures. Another focus of KlimaKonform is the systematic identification of vulnerable infrastructure during heat waves. In this context, urban climate simulations are used to assess the potential of green infrastructure to reduce outdoor and indoor heat stress conditions. All results of KlimaKonform will be available free of charge and in a comprehensible form via a freely accessible internet platform. Here, the already existing and well-received Regional Climate Information System ReKIS will be expanded to provide guidance for smaller communities.

How to cite: Bernhofer, C., Heidenreich, M., Maleska, V., Schinke, R., and Wollschläger, N.: KlimaKonform – An interdisciplinary project to support smaller communities in climate change adaptation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8085, https://doi.org/10.5194/egusphere-egu22-8085, 2022.

EGU22-9299 | Presentations | ITS3.3/CL3.2.20

Dealing with climate data uncertainty for agricultural impact assessments in West Africa 

Paula Aschenbrenner, Stephanie Gleixner, and Christoph Gornott

West Africa is characterized by high climate variability, has a fast-growing population, and is strongly reliant on rainfed agriculture. The largely weather-dependent agricultural production is now further at risk under increasing climate change. To adequately address climate risks and avoid further pressure on food security, evidence-based information on climate impacts and guidance on the suitability of adaptation measures is required. Simulations of regional impacts of climate change on crop production are strongly influenced by the climate data used as input. The selection of climate forcing data is most influential in regions with high uncertainties in past climate data and where the agricultural production varies greatly under climate variability (Ruane et al., 2021). Both are the case in West Africa, calling for an improved understanding of past and future climate data for its use in agricultural modelling over the region.

In this session we want to contribute to an increased understanding of the usability of different past and future climate data sets for agricultural impact models over West Africa. In a recent study, we compared ten CMIP6 (Coupled Model Inter-comparison Project Phase 6) models and their respective bias-adjusted ISIMIP3b (Inter-Sectoral Impact Model Intercomparison Project Phase 3b) versions against different observational and reanalysis data sets. Focusing on their use for agricultural impact assessments, we centred the analysis on climate indicators highly influencing agricultural production and their representation in the different climate data sets.

Results show that the ten CMIP6 models contain regional and model-dependent biases, with similar systematic biases as have been observed in earlier CMIP versions. Although the bias-adjusted version of this data aligns well overall with observations, we could detect some strong regional deviations from observations in agroclimatic variables like length of dry spells and rainy season onset. The use of the multi-model ensemble mean has resulted in an improved agreement of CMIP6 and the bias-adjusted ISIMIP3b data with observations. Choosing a subensemble of bias-adjusted models could only improve the performance of the ensemble mean locally but not over the whole region. The results of this study can support agricultural impact modelling in quantifying climate risk hotspots as well as suggesting suitable adaptation measures to increase the resilience of the agricultural sector in West Africa.
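As a concrete example of one of the agroclimatic indicators mentioned above, the sketch below computes the longest dry spell (maximum run of consecutive days with precipitation below 1 mm) per year from a synthetic daily series; the 1 mm wet-day threshold and the annual aggregation are illustrative assumptions rather than the study's exact definitions.

    # Hedged sketch: longest dry spell per year from a daily precipitation series.
    import numpy as np
    import pandas as pd

    def longest_dry_spell(pr_daily, threshold=1.0):
        """Longest run of consecutive days with precipitation < threshold (mm)."""
        dry = (pr_daily < threshold).astype(int).to_numpy()
        longest, run = 0, 0
        for d in dry:
            run = run + 1 if d else 0
            longest = max(longest, run)
        return longest

    # Synthetic daily precipitation standing in for one grid cell or station
    dates = pd.date_range("1991-01-01", "1992-12-31", freq="D")
    pr = pd.Series(np.random.default_rng(2).gamma(0.6, 4.0, len(dates)), index=dates)
    annual_cdd = pr.groupby(pr.index.year).apply(longest_dry_spell)
    print(annual_cdd)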

How to cite: Aschenbrenner, P., Gleixner, S., and Gornott, C.: Dealing with climate data uncertainty for agricultural impact assessments in West Africa, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9299, https://doi.org/10.5194/egusphere-egu22-9299, 2022.

EGU22-10358 | Presentations | ITS3.3/CL3.2.20 | Highlight

Distance to cool spots, a practical design guideline for heat resilient urban areas 

Jeroen Kluck, Laura Kleerekoper, Anna Solcerova, Stephanie Erwin, Lisette Klok, Monique de Groot, and Arjen Koekoek

In the Netherlands, municipalities are searching for guidelines for a heat-resilient design of the urban space. One of the guidelines that has recently been picked up is that each house should be within 300 meters of an attractive cool spot outside. The reason is that houses might get too hot during a heat wave, and it is therefore important that inhabitants have an alternative place to go. The distance of 300 m has been adopted for practical reasons. This guideline was proposed after research by the Amsterdam University of Applied Sciences and TAUW together with 15 municipalities.

To help municipalities take cool spots into account in their urban design, the national organization for disseminating climate data has developed a distance-to-coolness map for all Dutch built-up areas. This map shows the cool spots with a minimum size of 200 m2, based on a map of the PET for a hot summer day (2 m x 2 m spatial resolution). Furthermore, the map shows the walking distance for each house (via streets and foot paths) to the nearest cool spot.
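A strongly simplified sketch of how such a distance map could be derived is given below: cool pixels are taken from a PET raster, grouped into connected spots of at least 200 m2, and a distance map is computed. The Euclidean distance transform here is only a stand-in for the walking distance along streets and footpaths used in the real product, and the PET threshold of 35 °C is an assumption.

    # Hedged sketch: cool spots >= 200 m2 from a PET raster and a distance map.
    import numpy as np
    from scipy import ndimage

    def cool_spot_distance(pet, pixel_size=2.0, pet_threshold=35.0, min_area_m2=200.0):
        """Distance map (m) to the nearest cool spot of at least min_area_m2."""
        cool = pet < pet_threshold                    # candidate cool pixels
        labels, n = ndimage.label(cool)               # connected cool regions
        areas = ndimage.sum(cool, labels, index=np.arange(1, n + 1)) * pixel_size ** 2
        keep = np.isin(labels, np.flatnonzero(areas >= min_area_m2) + 1)
        # Distance (in metres) from every pixel to the nearest retained cool spot
        return ndimage.distance_transform_edt(~keep, sampling=pixel_size)

    pet = np.random.default_rng(3).uniform(30, 45, size=(500, 500))  # synthetic PET
    dist = cool_spot_distance(pet)
    print("share of pixels within 300 m of a cool spot:", (dist <= 300).mean())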

This map serves as a starting point, because not all cool spots are attractive cool spots. Research in 2021 showed which further basic and optional characteristics those cool spots should have: e.g. sufficiently large, a combination of sun and shade, benches, quiet, safe and clean. In fact, those places should be attractive places to stay on most days of the year.

With the distance to attractive cool spots, municipalities can easily see which areas lack attractive cool spots. The distance-to-cool-spot map is therefore a way to simplify complex climate data into an understandable and practical guideline. This is an improvement compared to a guideline based on temperature thresholds and thresholds for the duration of their exceedance. Municipalities like this practical approach, which combines climate adaptation with improving the livability of a city throughout the year.

How to cite: Kluck, J., Kleerekoper, L., Solcerova, A., Erwin, S., Klok, L., de Groot, M., and Koekoek, A.: Distance to cool spots, a practical design guideline for heat resilient urban areas, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10358, https://doi.org/10.5194/egusphere-egu22-10358, 2022.

EGU22-10396 | Presentations | ITS3.3/CL3.2.20

Bias adjustment of RCM simulations in high-latitude catchments: complexity versus skill in a changing climate 

Claudia Teutschbein, Faranak Tootoonchi, Andrijana Todorovic, Olle Räty, Jan Haerter, and Thomas Grabs

For climate-change impact studies at the catchment scale, meteorological variables are typically extracted from ensemble simulations provided by global (GCMs) and regional climate models (RCMs), which are then downscaled and bias-adjusted for each study site. For bias adjustment, different statistical methods that re-scale climate model outputs have been suggested in the scientific literature. They range from simple univariate methods that adjust each meteorological variable individually to more complex and statistically as well as computationally more demanding multivariate methods that take existing relationships between meteorological variables into consideration. While several attempts have been made over the past decade to evaluate such methods in various regions, there is no guidance for choosing an appropriate bias adjustment method in relation to the study question at hand. In particular, the question whether more complex multivariate methods are worth the effort by resulting in better adjustments of a wide range of univariate, multivariate and temporal features, remains unanswered. 
We here present an approach to systematically assess the performance of the most commonly used univariate and multivariate bias adjustment methods at different catchment scales in Sweden. Based on a multi-catchment and multi-model approach, we evaluated numerous univariate, multivariate and temporal features of precipitation, temperature and streamflow. Finally, we discuss potential benefits (skills and added value) and trade-offs (complexity and computational demand) of each method, in particular for hydrological climate-change impact studies in high latitudes.
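For readers unfamiliar with the univariate baseline referred to above, the sketch below shows empirical quantile mapping for a single variable on synthetic data; it is meant only to illustrate the class of simple univariate methods against which the multivariate approaches are weighed, not the specific implementations evaluated in the study.

    # Hedged sketch: empirical quantile mapping for one variable at one catchment.
    import numpy as np

    def empirical_qm(obs_hist, mod_hist, mod_fut, n_quantiles=100):
        """Map model values onto the observed distribution via quantile matching."""
        q = np.linspace(0.01, 0.99, n_quantiles)
        mod_q = np.quantile(mod_hist, q)        # model quantiles (calibration period)
        obs_q = np.quantile(obs_hist, q)        # observed quantiles (same period)
        # Pass each future model value through the quantile transfer function
        return np.interp(mod_fut, mod_q, obs_q)

    rng = np.random.default_rng(4)
    obs_hist = rng.gamma(2.0, 3.0, 5000)        # e.g. observed daily precipitation
    mod_hist = rng.gamma(2.0, 4.0, 5000)        # biased (too wet) model counterpart
    mod_fut = rng.gamma(2.2, 4.0, 5000)         # future model simulation
    adjusted = empirical_qm(obs_hist, mod_hist, mod_fut)
    print("raw future mean:", mod_fut.mean(), "adjusted mean:", adjusted.mean())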

How to cite: Teutschbein, C., Tootoonchi, F., Todorovic, A., Räty, O., Haerter, J., and Grabs, T.: Bias adjustment of RCM simulations in high-latitude catchments: complexity versus skill in a changing climate, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10396, https://doi.org/10.5194/egusphere-egu22-10396, 2022.

EGU22-11337 | Presentations | ITS3.3/CL3.2.20

AGRICA - Climate Risk Profiles for Sub-Saharan Africa 

Stephanie Gleixner, Julia Tomalka, Stefan Lange, and Christoph Gornott

Many countries recognise the importance of adaptation to climate change, but have limited access to reliable information on climate impacts and risks which should inform the selection of adaptation strategies. The AGRICA Climate Risk Profiles (CRPs) provide a condensed overview of present and future climate impacts and climate risks for different sectors. Based on projections from four climate models under two Greenhouse Gas emission scenarios, climate and climate impact data from the ISIMIP project is used to assess changes in climate, water resources, agriculture, infrastructure, ecosystems and human health. To date, CRPs have been published for 12 countries in sub-Saharan Africa and further CRPs are currently being developed both under the AGRICA project as well as in collaboration with external organisations. The CRPs are intended to inform decision makers from governments, international institutions, civil society, academia and the private sector regarding the risks of climate impacts in key sectors. The findings can feed into national and sub-national climate adaptation planning including NDC and NAP development, implementation and review, but also provide useful information and evidence at other strategic planning and implementation levels.

How to cite: Gleixner, S., Tomalka, J., Lange, S., and Gornott, C.: AGRICA - Climate Risk Profiles for Sub-Saharan Africa, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11337, https://doi.org/10.5194/egusphere-egu22-11337, 2022.

EGU22-11372 | Presentations | ITS3.3/CL3.2.20

Local downscaling of temperature projections for energy planning purposes in an Alpine area 

Dino Zardi, Lavinia Laiti, and Lorenzo Giovannini

Medium- and long-term energy planning at regional scale requires, among others, an estimate of the future energy demand driven by the expected heating and cooling needs of buildings, according to the local impact of the changing climate. To support the development of the 2021-2030 Energy Plan of the Province of Trento in the Alps, temperature projections provided by EURO-CORDEX Regional Climate Models (RCMs) were downscaled at 11 weather stations, representative of altitudes between 0 and 700 m a.m.s.l., to estimate the future values of a set of parameters that are commonly used to model the energy demand of buildings, such as: Heating and Cooling Degree Days (HDDs and CDDs), Test Reference Years (TRYs) and Extreme Reference Years (ERYs). A dataset of temperature and solar radiation hourly measurements, taken at the stations starting from 1983, was quality-controlled and analyzed to estimate statistics and observed trends for both variables, as well as degree days, reference years and climate change indices from the ETCCDI set. A hybrid downscaling approach (combining statistical and dynamical techniques) is then applied to temperature projections, based on the application of the morphing method to the results of an ensemble of 16 RCMs, allowing the estimation of future TRYs, ERYs and degree days in 2030 and 2050 at the selected sites (notice that no significant variation associated with climate change was assumed for solar radiation). According to historical observations (1983-2019), the warming tendency for monthly mean temperatures is clear and falls around 0.06 °C year⁻¹, slightly higher than reported at national level. The increase is more pronounced in spring and summer than in autumn and winter, with minima in December and especially May. No significant trend is observed for solar radiation. As for HDDs, stations at different altitudes show comparable reductions, of around -10 HDDs year⁻¹, with an apparent tendency to accelerate in the most recent years. The increase of CDDs can be quantified as less than 5 CDDs year⁻¹. The ensemble of temperature projections estimates temperature increases of 0.5 °C between 2016 and 2030 and 1.3 °C between 2016 and 2050 on average (0.03-0.04 °C year⁻¹), implying further future reductions of HDDs (between -4 and -11% at 2030, between -10 and -21% at 2050) and increases of CDDs (between 12 and 36% at 2030, between 36 and 87% at 2050). Such changes will correspond to major modifications in the seasonal profile of the energy demand associated with the winter heating and summer cooling of buildings in the Alpine area.
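The degree-day parameters mentioned above follow directly from daily mean temperature; the sketch below computes annual HDD and CDD sums for a synthetic series. The 18 °C base temperature is an assumption for illustration and may differ from the thresholds actually used in the study.

    # Hedged sketch: annual Heating and Cooling Degree Days from daily mean temperature.
    import numpy as np
    import pandas as pd

    def degree_days(t_mean_daily, base=18.0):
        """Annual HDD and CDD sums from a daily mean temperature series (degC)."""
        hdd = (base - t_mean_daily).clip(lower=0.0)   # heating demand proxy
        cdd = (t_mean_daily - base).clip(lower=0.0)   # cooling demand proxy
        out = pd.DataFrame({"HDD": hdd, "CDD": cdd})
        return out.groupby(out.index.year).sum()

    # Synthetic seasonal temperature cycle standing in for station observations
    dates = pd.date_range("1983-01-01", "1985-12-31", freq="D")
    t = pd.Series(12 + 10 * np.sin(2 * np.pi * (dates.dayofyear - 105) / 365.25),
                  index=dates)
    print(degree_days(t))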

How to cite: Zardi, D., Laiti, L., and Giovannini, L.: Local downscaling of temperature projections for energy planning purposes in an Alpine area, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11372, https://doi.org/10.5194/egusphere-egu22-11372, 2022.

EGU22-11838 | Presentations | ITS3.3/CL3.2.20

ClimateLynx. Generating global climatic linkages 

Clemens Rendl, Ramiro Marco Figuera, and Stefano Natali

Negative effects of climate change lead to diverse and extensive impacts. While some regions are more vulnerable than others to uncertain outlooks, reliable tools to assess climate risks, drive decisions and turn threats into opportunities are increasingly needed. Geospatial environmental data are globally available, covering populated as well as remote areas. The pool of data reaches back decades in time and grows day by day. Satellite data play a crucial role in improving the multi-dimensional description of the Earth system. This invaluable resource, when merged with socio-economic information and other open and free datasets, enables us to better understand dynamics of a globally changing climate and thus rapid and sound decision making.

ClimateLynx is a knowledge management system for climate-related data and information. A knowledge base, also called a “second brain”, is a tool that supports creating relationships between data and information to help think better. In our proposed service, the knowledge we want to gather, explore and exploit is data relevant for climate change induced decision making. Our vision is to create a constantly growing and evolving climate change knowledge graph supporting decision and policy makers, contributing to sustainable development and helping us move closer to achieving current and future climate pledges, and eventually a more sustainable future for all. ClimateLynx includes climate data and data from interdisciplinary domains alike, such as socio-economy (WB[1], ADB[2]) or health (WHO[3]). The aim is to fuse these data and thus generate location- and time-relevant insight. This way, a holistic approach to strengthening resilience is fostered. When the data pools are fused and put into context, it is possible to generate connections and correlations between indicators of different domains. The combination and linkage of inter-domain specific indicators could help to better understand interdisciplinary, climate change induced global dynamics and tail effects. Moreover, non-obvious linkages between indicators or domains could be highlighted or even uncovered. With the help of such a tool, it could be possible to detect emerging negative climate trends earlier, based on the time series analysis of indicators, and react adequately.

ClimateLynx focuses on urban regions and is devoted to decision makers, urban planners and data experts. Urban planners can take advantage of ClimateLynx by comparing initiatives and developments with other cities of, e.g., similar size, climatic conditions, or GDP. This enables efficient planning and can support ideas and initiatives to create more liveable and climate-resilient cities. Likewise, data experts might be interested to explore the various data sets and create new connections by linking indicators from natural and social science disciplines and thus discovering location-relevant specificities.

ClimateLynx is built on top of the data access and processing capabilities of the ADAM[4] platform, to quickly access and process large volumes of data. Through ADAM, ClimateLynx is fed with climate indicators calculated from data from historic, currently operating, and future satellite missions. Global climate indicators are computed periodically, city-aggregated information is extracted off-line to offer optimal user experience.


[1]https://data.worldbank.org/
[2]https://www.adb.org/what-we-do/data/main
[3]https://www.who.int/data/collections
[4]https://adamplatform.eu/

How to cite: Rendl, C., Figuera, R. M., and Natali, S.: ClimateLynx. Generating global climatic linkages, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11838, https://doi.org/10.5194/egusphere-egu22-11838, 2022.

EGU22-11883 | Presentations | ITS3.3/CL3.2.20

Selection of CMIP6 Global Climate Models for long-term hydrological projections 

Huong Nguyen Thi, Ho-Jun Kim, Min-Kyu Jung, and Hyun-Han Kwon

Selecting a suitable Global Climate Model (GCM) for basin-scale hydrological research in monsoon-affected regions under future climate projection scenarios is a great necessity. This study comprehensively evaluated the suitability of 25 available GCMs from the Coupled Model Intercomparison Project Phase 6 (CMIP6) to choose the GCMs with the best precipitation simulation skill over the main river basin systems of South Korea for the historical period 1973–2014. A bilinear interpolation method was used to map the simulated GCM precipitation and the observed precipitation onto a common 0.125° x 0.125° grid. The observed monthly precipitation at 56 automated weather stations from 1973 to 2014 was obtained from the Korea Meteorological Administration (KMA). A Multi-Criteria Decision Making (MCDM) approach based on four spatial metrics (Cramér’s V, Goodman-Kruskal (GK) Lambda, Mapcurves and Theil’s U) was proposed to compare the simulated GCM precipitation with the observed precipitation. To calculate the overall ranking of the GCMs and identify the best performing ones, this study applied the Jenks Natural Breaks classification based on the Compromise Programming index. The results indicated that: 1) GCM performance differed across the spatial indices, yielding a most suitable GCM ranking for each watershed. 2) The best performing GCMs simulated the annual mean precipitation well, with a bias of less than 15% for southwestern watersheds and higher biases (30-50%) for the remaining watersheds. 3) The majority of CMIP6 GCMs could capture the trends and the spatial distribution of annual and seasonal precipitation over South Korea. However, it was also found that most GCMs underestimated summer precipitation and overestimated spring precipitation. Therefore, the selected GCMs with corrected biases can be usefully employed for analyzing future changes in hydrological patterns associated with climate change projections.
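The overall-ranking step described above combines several spatial skill metrics into a single Compromise Programming index; a minimal sketch with hypothetical metric values is shown below. Equal weights, p = 2 and the convention that larger metric values indicate better agreement are assumptions made for illustration only, not the study's settings.

    # Hedged sketch: Compromise Programming index over several (normalized) metrics.
    import numpy as np
    import pandas as pd

    def compromise_programming(scores, weights=None, p=2):
        """Lower CP index = closer to the ideal (best value of every metric)."""
        ideal = scores.max(axis=0)      # assumption: larger metric value = better
        worst = scores.min(axis=0)
        d = (ideal - scores) / (ideal - worst)       # normalized distance to ideal
        w = (np.full(scores.shape[1], 1.0 / scores.shape[1])
             if weights is None else np.asarray(weights))
        return ((d ** p).mul(w, axis=1).sum(axis=1)) ** (1.0 / p)

    metrics = pd.DataFrame(
        {"CramersV": [0.62, 0.55, 0.70], "GKLambda": [0.41, 0.39, 0.48],
         "Mapcurves": [0.58, 0.52, 0.61], "TheilU": [0.66, 0.60, 0.72]},
        index=["GCM-A", "GCM-B", "GCM-C"])           # hypothetical models and scores
    cp = compromise_programming(metrics)
    print(cp.sort_values())                          # smallest CP = best-ranked GCM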

Keywords: Global Climate Models (GCMs), CMIP6, Bilinear interpolation, Multi-Criteria Decision Making, Jenks Natural Break classification.

How to cite: Nguyen Thi, H., Kim, H.-J., Jung, M.-K., and Kwon, H.-H.: Selection of CMIP6 Global Climate Models for long-term hydrological projections, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11883, https://doi.org/10.5194/egusphere-egu22-11883, 2022.

EGU22-12037 | Presentations | ITS3.3/CL3.2.20

APENA3 – Methodology and steps for the preparation of three pilot climate adaptation strategies and implementation plans in Ukraine 

Christos Tsompanidis, Svitlana Krakovska, Theofanis Lolos, Antonios Sakalis, Eleni Ieremiadi, Alex Gittelson, Oksana Kysil, and Alla Krasnozhon

APENA 3, “Strengthening the capacity of regional and local administrations for implementation and enforcement of EU environmental and climate change legislation and development of infrastructure projects”, is an EU-funded project aiming to effectively raise the capacities of Ukrainian public authorities at local and regional level in designing and implementing key reforms. The main feature of Component 3 is the development of climate adaptation strategies, followed by implementation plans, for three Ukrainian Oblasts. Following an evaluation process, the Oblasts of Ivano-Frankivs’ka, L’vivs’ka and Mykolaivs’ka were determined to be the most appropriate in which to undertake the above activities. The first important step was to identify the sectors of interest in relation to climate change, at Oblast but also national level, utilizing experiences and know-how from Europe and internationally, since the pilot Strategies and Implementation Plans will be used as guidance for other Oblasts to elaborate regional climate adaptation planning in the future. The selected sectors include agriculture, forests, biodiversity and ecosystems, water management, fisheries, coastal areas, tourism, critical infrastructure, energy, health, built environment and cultural heritage. The next step in the methodology is the vulnerability and risk assessment. The project team will identify appropriate climate indices to evaluate vulnerability and risk based on specific climatic impact drivers for the respective sectors. Sensitivity and exposure analyses will follow in order to identify the degree of vulnerability of each sector and geographic area in the three pilot Oblasts. Based on the previous assessment, impacts will be identified and examined in terms of likelihood and severity, guiding the team in determining the risk. The various challenges in the use of climate data (stakeholder engagement, identification of sectoral issues, collection of climate data, etc.) will be identified and tackled at this stage. Following the preparation of the project’s scientific basis, the expert team will determine sectoral adaptation thematic pillars that will include horizontal and location-specific measures and actions for the evaluated sectors.

How to cite: Tsompanidis, C., Krakovska, S., Lolos, T., Sakalis, A., Ieremiadi, E., Gittelson, A., Kysil, O., and Krasnozhon, A.: APENA3 – Methodology and steps for the preparation of three pilot climate adaptation strategies and implementation plans in Ukraine, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12037, https://doi.org/10.5194/egusphere-egu22-12037, 2022.

EGU22-12415 | Presentations | ITS3.3/CL3.2.20

RoCliB - Bias corrected CORDEX RCM dataset over Romania 

Alexandru Dumitrescu, Vlad Vlad Amihăesei, and Sorin Cheval

Four climate parameters (i.e., maximum, mean and minimum air temperature and precipitation amount) from 10 regional climate models, provided by the EURO-CORDEX initiative, are adjusted using as reference the ROCADA gridded dataset. The adjustment was performed on a daily temporal resolution for the historical period (1971-2005), as well as for climate change scenarios based on two Representative Concentration Pathways (RCP45 and RCP85).

The best method for bias correction was selected following a 2-fold cross-validation approach, which was performed on historical data using two methods: Quantile Mapping (QMAP) and Multivariate Bias Correction with N-dimensional probability (MBCn). The performances of the two methods are very similar when analysing the frequency distribution of each selected variable. However, comparing the inter-variable correlations of the adjusted datasets with those of the reference dataset revealed much smaller differences for the dataset adjusted with the multivariate method; hence, this method was used to produce the BC climate scenario dataset.
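The inter-variable check described above can be illustrated by comparing the temperature-precipitation correlation of an adjusted dataset with that of the reference for one grid cell; the sketch below does so on synthetic series in which one adjustment deliberately degrades the dependence and another roughly preserves it. The data and the two "adjustments" are illustrative assumptions, not QMAP or MBCn output.

    # Hedged sketch: how well does an adjusted dataset preserve the inter-variable
    # (temperature-precipitation) correlation of the reference dataset?
    import numpy as np

    def correlation_gap(ref_t, ref_p, adj_t, adj_p):
        """Absolute difference between reference and adjusted correlations."""
        r_ref = np.corrcoef(ref_t, ref_p)[0, 1]
        r_adj = np.corrcoef(adj_t, adj_p)[0, 1]
        return abs(r_ref - r_adj)

    rng = np.random.default_rng(5)
    ref_t = rng.normal(10, 8, 3000)                                  # daily Tmean (degC)
    ref_p = np.maximum(0, 2 - 0.1 * ref_t + rng.normal(0, 1, 3000))  # correlated precip
    adj_a = (ref_t + rng.normal(0, 3, 3000), rng.permutation(ref_p))  # dependence lost
    adj_b = (ref_t + rng.normal(0, 1, 3000), ref_p + rng.normal(0, 0.2, 3000))
    print("gap, dependence-destroying adjustment:", correlation_gap(ref_t, ref_p, *adj_a))
    print("gap, dependence-preserving adjustment:", correlation_gap(ref_t, ref_p, *adj_b))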

Based on the MBCn adjusted dataset, a climate change analysis over Romania was performed at the seasonal and annual scales. Overall, for the multimodel ensemble mean, at the country level, a substantial temperature increase is reported for both scenarios and no significant trend is revealed for precipitation amount.

The adjusted RCMs are provided without any restrictions via an open-access repository in netCDF CF-1.4-compliant file format (https://doi.org/10.5281/zenodo.4642463). The BC climate models are archived at the 0.1° spatial resolution, in the WGS-84 coordinate system, at a daily temporal resolution. Based on the bias-corrected dataset, relevant information about climate change over Romania’s territory is provided through an interactive dashboard, implemented in an open-source web application (RoCliB data explorer - http://suscap.meteoromania.ro/roclib).

How to cite: Dumitrescu, A., Vlad Amihăesei, V., and Cheval, S.: RoCliB - Bias corrected CORDEX RCM dataset over Romania, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12415, https://doi.org/10.5194/egusphere-egu22-12415, 2022.

EGU22-12534 | Presentations | ITS3.3/CL3.2.20

A Study on Heavy Rainfall and Flash Floods Using Different Climate Toolboxes 

Inna Khomenko and Roshanak Tootoonchi

Under climate change, extreme precipitation events responsible for flash floods, which can cause significant economic losses and human casualties, become more frequent and severe. This escalation is expected to intensify with global warming, which increases water vapor in the atmosphere and thus intensifies precipitation events. Recent reports show that most flood events in Italy are flash floods; Italy is therefore projected to be increasingly affected by flood events caused by heavy precipitation.

In this study, trends in extreme precipitation for the present day and for future projections up to 2100 under the worst-case warming scenario, namely the Representative Concentration Pathway (RCP) 8.5 scenario, are investigated using the Copernicus and KNMI Climate Explorer databases.

Using the easy-to-use KNMI Climate Explorer database, anomalies of RX1day (relative to the 1981-2010 reference period) for the historical period and up to 2100 are retrieved for 7 Italian cities highly affected by flash floods (Venice, Rome, Naples, Genoa, Cagliari, Catanzaro, Palermo). For these cities, strong positive trends are calculated, and the highest positive anomalies, of up to 50-80 mm/day, are found around the middle of the 21st century.

The Copernicus toolbox editor was used to retrieve the RX1day index and the 95th percentile from the present-day simulation (2011–2020) and the future projection (2021–2100) of global precipitation from a total of 18 bias-adjusted Global Climate Models from CMIP5, and precipitation time series for the 7 Italian cities were extracted in order to obtain the trends. The RX1day index does not show a significant increasing trend. Moreover, for the 95th percentile, negative trends are obtained for most of the Italian cities in question.
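
For readers unfamiliar with the index, RX1day is the maximum 1-day precipitation in a given period (here taken per year). A minimal pandas sketch of how RX1day, the wet-day 95th percentile and a linear trend could be derived from a daily series is shown below; the series is a synthetic placeholder, since the study retrieved these quantities directly from the Copernicus toolbox.

```python
import numpy as np
import pandas as pd

# 'pr' is a hypothetical daily precipitation series (mm/day) indexed by date
dates = pd.date_range("2011-01-01", "2100-12-31", freq="D")
pr = pd.Series(np.random.default_rng(1).gamma(0.6, 4.0, len(dates)), index=dates)

rx1day = pr.resample("YS").max()                    # annual maximum 1-day precipitation
p95 = pr[pr >= 1.0].resample("YS").quantile(0.95)   # yearly 95th percentile of wet days (>= 1 mm)

# simple linear trend (mm/day per year) via least squares
years = rx1day.index.year.values
slope, intercept = np.polyfit(years, rx1day.values, 1)
print(f"RX1day trend: {slope:.3f} mm/day per year")
```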

Since heavy rainfall is usually caused by convective precipitation, near-surface convective precipitation trends for the period 1991 to 2020 are derived from the ERA5 monthly averaged reanalysis for the Mediterranean region and Italy, for the months in which flash floods are most often observed. The most significant increases in convective precipitation are obtained in July for Northern Italy, in September for Southern Italy, and in November for the west coast zone.

It can therefore be said that positive trends in precipitation dominate in the historical data. However, for different projections and climate models from different databases, different and sometimes even opposite results are obtained.

How to cite: Khomenko, I. and Tootoonchi, R.: A Study on Heavy Rainfall and Flash Floods Using Different Climate Toolboxes, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12534, https://doi.org/10.5194/egusphere-egu22-12534, 2022.

We share our experiences for impact and adaptation studies by presenting results of a climate modelling study based on ERA5 data at different horizontal resolutions, i.e., down from approximately 300 km to 25 km. The ERA5 data are used as a meteorological constraint (nudging) to perform a numerical model study on the influence of horizontal resolution on aerosol hygroscopic growth effects on meteorology in urban and remote atmospheric locations. For this sensitivity study we only switch on/off the associated aerosol water mass. Aerosol water is crucial for climate impact and adaptation studies, as it links air pollution with weather and climate through direct and indirect radiative feedbacks. We try to separate urban from continental-scale effects using the EMAC atmospheric chemistry climate and Earth system model. EMAC is applied globally at various horizontal resolutions, in a set-up similar to our previous PMAp evaluation study (https://www.eumetsat.int/PMAp), i.e., resolving weather time-scales. We compare our EMAC results for the aerosol optical depth (AOD) against CAMS reference simulations (40 km), various satellite data (MODIS-Aqua/Terra, PMAp) and AERONET surface observations (~30 km radius around the instrument). While CAMS REA includes AOD data assimilation (MODIS/PMAp), EMAC calculates the AOD ab initio from size-resolved aerosol hygroscopic growth without any data assimilation, and with an option to include aerosol-cloud feedbacks. Our results show that the EMAC AOD values are within the range of the CAMS and satellite AOD. The aerosol water effect on AOD is noticeable for the nudged and free-running EMAC versions at both urban and remote locations. The aerosol water effect is larger for the free-running EMAC versions, and more pronounced for urban AERONET sites, e.g., Hamburg, Karlsruhe, Thessaloniki, Zaragoza. The moisture feedback with air pollution is resolution dependent (in time and space). Generally, it becomes more relevant with increasing resolution due to finer moisture and air pollution gradients, which indicates the importance of horizontal resolution for impact and adaptation studies.

How to cite: Metzger, S., Feigel, G., Steil, B., Rémy, S., and Christen, A.: Influence of horizontal resolution on aerosol hygroscopic growth effects in urban and remote boundary layers in the context of climate impact and adaptation studies, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13495, https://doi.org/10.5194/egusphere-egu22-13495, 2022.

EGU22-1024 | Presentations | ITS3.5/NP3.1

Efficiency and synergy of simple protective measures against COVID-19: Masks, ventilation and more 

Ulrich Pöschl, Yafang Cheng, Frank Helleis, Thomas Klimach, and Hang Su

The public and scientific discourse on how to mitigate the COVID-19 pandemic is often focused on the impact of individual protective measures, in particular on vaccination. In view of changing virus variants and conditions, however, it is not clear whether vaccination or any other protective measure alone may suffice to contain the transmission of SARS-CoV-2. Accounting for both droplet and aerosol transmission, we investigated the effectiveness and synergies of vaccination and non-pharmaceutical interventions like masking, distancing & ventilation, testing & isolation, and contact reduction as a function of compliance in the population. For realistic conditions, we find that it would be difficult to contain highly contagious SARS-CoV-2 variants by any individual measure. Instead, we show how multiple synergetic measures have to be combined to reduce the effective reproduction number (Re) below unity for different basic reproduction numbers ranging from the SARS-CoV-2 ancestral strain up to measles-like values (R0 = 3 to 18).
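
To illustrate how synergetic measures combine, the short sketch below multiplies the individual reductions to obtain an effective reproduction number. It is a simplified, hypothetical calculation in the spirit of the argument, not the authors' model; all efficacy and compliance values are placeholders.

```python
# Effective reproduction number when several independent measures are combined.
# Each measure reduces transmission by (efficacy * compliance); values are placeholders.
R0 = 8.0   # assumed basic reproduction number of a highly contagious variant

measures = {
    "masking":                (0.8, 0.7),   # (efficacy, compliance)
    "distancing_ventilation": (0.5, 0.6),
    "testing_isolation":      (0.4, 0.5),
    "vaccination":            (0.7, 0.75),
}

Re = R0
for efficacy, compliance in measures.values():
    Re *= 1.0 - efficacy * compliance   # multiplicative reduction per measure

print(f"Re = {Re:.2f}  ({'contained' if Re < 1 else 'not contained'})")
```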

Face masks are well-established and effective preventive measures against the transmission of respiratory viruses and diseases, but their effectiveness for mitigating SARS-CoV-2 transmission is still under debate. We show that variations in mask efficacy can be explained by different regimes of virus abundance (virus-limited vs. virus-rich) and are related to population-average infection probability and reproduction number. Under virus-limited conditions, both surgical and FFP2/N95 masks are effective at reducing the virus spread, and universal masking with correctly applied FFP2/N95 masks can reduce infection probabilities by factors up to 100 or more (source control and wearer protection).

Masks are particularly effective in combination with synergetic measures like ventilation and distancing, which can reduce the viral load in breathing air by factors up to 10 or more and help maintain virus-limited conditions. Extensive experimental studies, measurement data, numerical calculations, and practical experience show that window ventilation supported by exhaust fans (i.e. mechanical extract ventilation) is a simple and highly effective measure to increase air quality in classrooms. This approach can be used against the aerosol transmission of SARS-CoV-2. Mechanical extract ventilation (MEV) is very well suited not only for combating the COVID-19 pandemic but also for sustainably ventilating schools in an energy-saving, resource-efficient, and climate-friendly manner. Distributed extract ducts or hoods can be flexibly reused, removed and stored, or combined with other devices (e.g. CO2 sensors), which is easy due to the modular approach and low-cost materials (www.ventilationmainz.de).

The scientific findings and approaches outlined above can be used to design, communicate, and implement efficient strategies for mitigating the COVID-19 pandemic.

References:

Cheng et al., Face masks effectively limit the probability of SARS-CoV-2 transmission, Science, 372, 1439, 2021, https://doi.org/10.1126/science.abg6296 

Klimach et al., The Max Planck Institute for Chemistry mechanical extract ventilation (MPIC-MEV) system against aerosol transmission of COVID-19, Zenodo, 2021, https://doi.org/10.5281/zenodo.5802048  

Su et al., Synergetic measures to contain highly transmissible variants of SARS-CoV-2, medRxiv, 2021, https://doi.org/10.1101/2021.11.24.21266824

 

How to cite: Pöschl, U., Cheng, Y., Helleis, F., Klimach, T., and Su, H.: Efficiency and synergy of simple protective measures against COVID-19: Masks, ventilation and more, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1024, https://doi.org/10.5194/egusphere-egu22-1024, 2022.

EGU22-1890 | Presentations | ITS3.5/NP3.1

Possible effect of the particulate matter (PM) pollution on the Covid-19 spread in southern Europe 

Jean-Baptiste Renard, Gilles Delaunay, Eric Poincelet, and Jérémy Surcin

The time evolution of COVID-19 death cases exhibits several distinct episodes since the start of the pandemic early in 2020. We propose an analysis of several Southern European regions that highlights how the beginning of each episode correlates with a strong increase in the concentration levels of particulate matter smaller than 2.5 µm (PM2.5). Following the original PM2.5 spike, the evolution of the COVID-19 spread depends on the (partial) lockdowns and vaccination rates, so the highest level of confidence in the correlation can only be achieved when considering the beginning of each episode. The analysis is conducted for the 2020-2022 period at different locations: the Lombardy region (Italy), where we consider the mass concentration measurements obtained by air quality monitoring stations (µg.m-3), and the cities of Paris (France), Lisbon (Portugal) and Madrid (Spain), using in-situ measurements counting particles (cm-3) in the 0.5-2.5 µm size range obtained with hundreds of mobile aerosol counters. The particle counting methodology is more suitable for evaluating the possible correlation between PM pollution and COVID-19 spread, because it better estimates the concentration of submicronic particles, whereas a mass concentration methodology would be skewed by larger particles. Very fine particles of less than one micron go deeper inside the body and can even cross the alveolar-capillary barrier, subsequently attacking most of the organs through the bloodstream and potentially triggering a pejorative systemic inflammatory reaction. The rapid increase in the number of deaths attributed to COVID-19 starts between 2 weeks and one month after PM events that often occur in winter, which is coherent with the virus incubation time and its lethal outcome. We suggest that pollution by submicronic particles alters the status of the pulmonary alveoli and thus significantly increases the lungs' susceptibility to the virus.
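
A minimal sketch of the kind of lagged comparison described above, relating a PM2.5 series to daily COVID-19 deaths shifted by two to four weeks, is given below. File and column names are hypothetical, and the study's actual analysis focused on the onset of each episode rather than a blanket correlation.

```python
import pandas as pd

# Hypothetical daily series for one city: columns 'date', 'pm25', 'deaths'
df = pd.read_csv("city_series.csv", parse_dates=["date"]).set_index("date").asfreq("D")

# Spearman correlation between PM2.5 and deaths lagged by 14 to 30 days
for lag in range(14, 31, 4):
    r = df["pm25"].corr(df["deaths"].shift(-lag), method="spearman")
    print(f"lag {lag:2d} days: rho = {r:.2f}")
```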

How to cite: Renard, J.-B., Delaunay, G., Poincelet, E., and Surcin, J.: Possible effect of the particulate matter (PM) pollution on the Covid-19 spread in southern Europe, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1890, https://doi.org/10.5194/egusphere-egu22-1890, 2022.

In the past two years, numerous advances have been made in the ability to predict the progress of COVID-19 epidemics. Basic forecasting of the health state of a population with respect to a given disease is based on the well-known family of SIR models (Susceptible-Infected-Recovered). The models used in epidemiology were based on deterministic behavior, so the epidemiological picture tomorrow depends exclusively on the numbers recorded today. The forecasting shortcomings of the deterministic SEIR models previously used in epidemiology were difficult to highlight before the advent of COVID-19, because epidemiology was mostly not concerned with real-time forecasting. From the first wave of COVID-19 infections, the limitations of using deterministic models were immediately evident: to use them, one should know the exact status of the population, and this knowledge was limited by the capacity to process swabs. Furthermore, there is an intrinsic variability of the dynamics which depends on age, sex, characteristics of the virus, variants and vaccination status.

Our main contribution was to show that a SEIR model that assumes these parameters to be constant cannot be used for reliable predictions of the COVID-19 pandemic, and that more realistic forecasts can be obtained by adding fluctuations to the model. The fluctuations in the dynamics of the virus induced by these factors do not just add variability around the deterministic solution of the SIR models; they also introduce a different timing of the pandemic, which influences the epidemic peak. With our model we have found that, even with a reproduction number Rt less than 1, local epidemic peaks can occur and resume over a certain period of time.
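
A minimal numerical sketch of the idea of adding fluctuations to an SEIR model is given below: the contact rate is perturbed by noise in each ensemble member, so the model yields a range of trajectories and peak timings rather than a single deterministic curve. The parameter values and noise model are illustrative assumptions, not those of the cited works.

```python
import numpy as np

def seir_ensemble(n_runs=200, days=300, dt=0.5, N=1e6,
                  beta0=0.3, sigma=1/5.2, gamma=1/10, noise=0.15, seed=0):
    """Euler integration of an SEIR model with a stochastically fluctuating contact rate."""
    rng = np.random.default_rng(seed)
    steps = int(days / dt)
    I_out = np.empty((n_runs, steps))
    for r in range(n_runs):
        S, E, I, R = N - 100, 0.0, 100.0, 0.0
        for k in range(steps):
            beta = beta0 * np.exp(noise * np.sqrt(dt) * rng.standard_normal())
            new_inf = beta * S * I / N * dt
            S -= new_inf
            E += new_inf - sigma * E * dt
            I += sigma * E * dt - gamma * I * dt
            R += gamma * I * dt
            I_out[r, k] = I
    return I_out

I_out = seir_ensemble()
print("ensemble spread of the peak:", np.percentile(I_out.max(axis=1), [5, 50, 95]))
```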

Introducing noise and uncertainty allows us to define a range of possible scenarios instead of making a single prediction. This is what happens when we replace the deterministic approach with a probabilistic approach. The probabilistic models used to predict the progress of the COVID-19 epidemic are conceptually very similar to those used by climatologists to imagine future environmental scenarios based on the actions taken in the present. As human beings we can intervene in both systems: based on the choices we make and the fluctuations of the systems, we can predict different responses. In the context of the emergency that we faced, the collaboration between different scientific fields was therefore fundamental, and by comparing their approaches these fields were able to provide more accurate answers. Furthermore, a close collaboration has arisen between epidemiologists and climatologists, a beautiful synergy that can greatly help society in a difficult moment.

References

- Faranda, Castillo, Hulme, Jezequel, Lamb, Sato & Thompson (2020). Chaos: An Interdisciplinary Journal of Nonlinear Science, 30(5), 051107.

- Alberti & Faranda (2020). Communications in Nonlinear Science and Numerical Simulation, 90, 105372.

- Faranda & Alberti (2020). Chaos: An Interdisciplinary Journal of Nonlinear Science, 30(11), 111101.

- Faranda, Alberti, Arutkin, Lembo & Lucarini (2021). Chaos: An Interdisciplinary Journal of Nonlinear Science, 31(4), 041105.

- Arutkin, Faranda, Alberti & Vallée (2021). Chaos: An Interdisciplinary Journal of Nonlinear Science, 31(10), 101107.

How to cite: Faranda, D.: How concepts and ideas from Statistical and Climate physics improve epidemiological modelling of the COVID 19 pandemics, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2801, https://doi.org/10.5194/egusphere-egu22-2801, 2022.

EGU22-3690 | Presentations | ITS3.5/NP3.1

Improving the conservation of virus infectivity during airborne exposure experiments 

Ghislain Motos, Kalliopi Violaki, Aline Schaub, Shannon David, Tamar Kohn, and Athanasios Nenes

Recurrent epidemic outbreaks such as the seasonal flu and the ongoing COVID-19 pandemic are disastrous events for our societies, in terms of fatalities, disruption of social and educational structures, and financial losses. The difficulty of controlling the spread of COVID-19 over the last two years has shown that the basic mechanisms of transmission of such pathogens are still poorly understood.

Three different routes of virus transmission are known: direct contact (e.g. through handshakes) and indirect contact through fomites; ballistic droplets produced by speaking, sneezing or coughing; and airborne transmission through aerosols, which can also be produced by normal breathing. The latter route, which has long been ignored, even by the World Health Organization during the COVID-19 pandemic, now appears to play the predominant role in the spread of airborne diseases (e.g. Chen et al., 2020).

Further scientific research thus needs to be conducted to better understand the mechanistic processes that lead to the inactivation of airborne viruses, as well as the environmental conditions which favour these processes. In addition to modelling and epidemiological studies, chamber experiments, where viruses are exposed to various humidity, temperature and/or UV conditions, make it possible to simulate everyday-life conditions for virus transmission. However, the current standard instrumental solutions for virus aerosolization into the chamber and sampling from it use high fluid forces and recirculation, which can cause infectivity losses (Alsved et al., 2020), and are not comparable to the production of airborne aerosol in the respiratory tract.

In this study, we utilized two of the softest aerosolization and sampling techniques: the sparging liquid aerosol generator (SLAG, CH Technologies Inc., Westwood, NJ, USA), which forms aerosol from a liquid suspension by bubble bursting, thus mimicking natural aerosol formation in wet environments (e.g. the respiratory system but also lakes, the sea, toilets, etc.); and the viable virus aerosol sampler (BioSpot-VIVAS, Aerosol Devices Inc., Fort Collins, CO, USA), which grows particles via water vapour condensation to gently collect them down to a few nanometres in size. We characterized these systems with particle sizers and biological analysers using non-pathogenic viruses such as bacteriophages suspended in surrogate lung fluid and artificial saliva. We compared the size distributions of the aerosol produced from these suspensions against similar distributions generated with standard nebulizers, and assessed the ability of these devices to produce aerosol that more closely resembles that found in human exhaled air. We also assessed the conservation of viral infectivity with the VIVAS compared to conventional biosamplers.

 

Acknowledgment

 

We acknowledge the IVEA project, funded in the framework of a SINERGIA grant (Swiss National Science Foundation).

 

References

 

Alsved, M., Bourouiba, L., Duchaine, C., Löndahl, J., Marr, L. C., Parker, S. T., Prussin, A. J., and Thomas, R. J. (2020): Natural sources and experimental generation of bioaerosols: Challenges and perspectives, Aerosol Science and Technology, 54, 547–571.

Chen, W., Zhang, N., Wei, J., Yen, H.-L., and Li, Y. (2020): Short-range airborne route dominates exposure of respiratory infection during close contact, Building and Environment, 176, 106859.

How to cite: Motos, G., Violaki, K., Schaub, A., David, S., Kohn, T., and Nenes, A.: Improving the conservation of virus infectivity during airborne exposure experiments, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3690, https://doi.org/10.5194/egusphere-egu22-3690, 2022.

EGU22-3936 | Presentations | ITS3.5/NP3.1

COVID-19 effects on measurements of the Earth Magnetic Field in the urbanized area of Brest 

Jean-Francois Oehler, Alexandre Leon, Sylvain Lucas, André Lusven, and Gildas Delachienne


 

Since September 2019, Shom’s Magnetic Station (SMS) has been deployed in the northern neighbourhoods of the medium-sized city of Brest (Brittany, France, about 210,000 inhabitants). SMS continuously measures the intensity of the Earth Magnetic Field (EMF) with an absolute Overhauser sensor. The main goal of SMS is to derive local external variations of the EMF, mainly due to solar activity. These variations appear as low- and high-frequency parasitic signals in magnetic data and need to be corrected. Magnetic mobile stations or permanent observatories are usually installed in isolated areas, far from human activities and electromagnetic effects. This is clearly not the case for SMS, mainly for practical reasons of security, maintenance and data accessibility. However, despite its location in an urbanized area, SMS remains the westernmost reference station for processing marine magnetic data collected along the Atlantic and Channel coasts of France.

The coronavirus pandemic has had unexpected consequences on the quality of the measurements collected by SMS. For example, during the first French lockdown between March and May 2020, the noise level decreased significantly, by about 50%. Average standard deviations computed on the 1 Hz time series over 1-min periods fell from about 1.5 nT to 0.8 nT. This more stable behavior of SMS is clearly correlated with the drop in human activity and traffic in the city of Brest.
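
The noise metric quoted above (standard deviation of the 1 Hz total-field series over 1-minute windows) can be reproduced with a few lines of pandas; a minimal sketch with a hypothetical file name and column layout is shown below.

```python
import pandas as pd

# Hypothetical 1 Hz total-field record: columns 'time' (UTC) and 'F' (nT)
df = pd.read_csv("sms_total_field.csv", parse_dates=["time"]).set_index("time")

# standard deviation of the total field within each 1-minute window
std_1min = df["F"].resample("1min").std()

# daily average of the 1-minute standard deviations, e.g. to compare
# the lockdown period with normal operations
daily_noise = std_1min.resample("1D").mean()
print(daily_noise.loc["2020-03-17":"2020-05-11"].mean())   # French first lockdown
```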

 

Keywords: Shom’s Magnetic Station (SMS), Earth Magnetic Field, COVID-19.

 

How to cite: Oehler, J.-F., Leon, A., Lucas, S., Lusven, A., and Delachienne, G.: COVID-19 effects on measurements of the Earth Magnetic Field in the urbanized area of Brest, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3936, https://doi.org/10.5194/egusphere-egu22-3936, 2022.

Economic activities and the associated emissions declined significantly during the 2019 novel coronavirus (COVID-19) pandemic, which created a natural experiment to assess the impact of precursor emission control policies on ozone (O3) pollution. In this study, we utilized comprehensive satellite and ground-level observations and source-oriented chemical transport modeling to investigate the O3 variations during the COVID-19 pandemic in China. We found that the significantly elevated O3 in the North China Plain (40%) and the Yangtze River Delta (35%) was mainly attributed to the enhanced atmospheric oxidation capacity (AOC) in these regions, associated with the meteorology and emission reductions during lockdown. In addition, O3 formation regimes shifted from VOC-limited to NOx-limited and transition regimes with the decline in NOx during lockdown. We suggest that future O3 control policies should comprehensively consider the effects of AOC on O3 elevation and coordinated regulations of O3 precursor emissions.

How to cite: Wang, P., Zhu, S., and Zhang, H.: Comprehensive Insights Into O3 Changes During the COVID-19 From O3 Formation Regime and Atmospheric Oxidation Capacity, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4170, https://doi.org/10.5194/egusphere-egu22-4170, 2022.

EGU22-5126 | Presentations | ITS3.5/NP3.1

Nature-based Solutions in actions: improving landscape connectivity during the COVID-19 

Yangzi Qiu, Ioulia Tchiguirinskaia, and Daniel Schertzer

In the last few decades, Nature-based Solutions (NBS) have become widely considered a sustainable development strategy for urban environments. Assessing the performance of NBS is essential for understanding their efficiency in addressing a large range of natural and societal challenges, such as climate change, ecosystem services and human health. With the rapid onset of the COVID-19 pandemic, the inner relationship between humans and nature became apparent. However, current catchment management mainly focuses on reducing hydro-meteorological and/or climatological risks and improving urban climate resilience. This single-dimensional management seems insufficient when facing epidemics, and multi-dimensional management (e.g., reducing zoonoses) is necessary. In this respect, policymakers are paying more attention to NBS. Hence, it is important to increase landscape connectivity with the help of NBS, in order to improve ecosystem services and reduce the health risks associated with COVID-19.

This study takes the Guyancourt catchment as an example. The selected catchment is located in the southwestern suburbs of Paris, with a total area of around 5.2 km2. ArcGIS software is used to assess the patterns of structural landscape connectivity, and the heterogeneous spatial distribution of the current green spaces over the catchment is quantified with the help of the scale-independent indicator of fractal dimension. To quantify opportunities to increase landscape connectivity over the catchment, a least-cost path approach is used to map potential NBS that link urban green spaces through vacant parcels, alleys, and smaller green spaces. Finally, to prioritise these potential NBS across multiple scales, a new scale-independent indicator within the Universal Multifractal framework is proposed in this study.
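
As an illustration of the scale-independent indicator mentioned above, the sketch below estimates a box-counting fractal dimension for a binary green-space raster. It is a generic implementation operating on a hypothetical boolean array, not the exact ArcGIS-based procedure used in the study.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary raster by box counting."""
    counts = []
    for s in sizes:
        n_y, n_x = mask.shape[0] // s, mask.shape[1] // s
        trimmed = mask[:n_y * s, :n_x * s]
        # count boxes of side s that contain at least one green-space cell
        boxes = trimmed.reshape(n_y, s, n_x, s).any(axis=(1, 3)).sum()
        counts.append(boxes)
    # slope of log(count) vs log(1/size) gives the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# hypothetical 256 x 256 green-space mask (True = green space)
rng = np.random.default_rng(2)
mask = rng.random((256, 256)) > 0.7
print(f"estimated fractal dimension: {box_counting_dimension(mask):.2f}")
```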

The results indicated that NBS can effectively improve landscape connectivity and have the potential to reduce the physical and mental risks caused by COVID-19. Overall, this study proposed a scale-independent approach for enhancing the multiscale connectivity of the NBS network in urban areas and for providing quantitative suggestions for on-site redevelopment.

How to cite: Qiu, Y., Tchiguirinskaia, I., and Schertzer, D.: Nature-based Solutions in actions: improving landscape connectivity during the COVID-19, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5126, https://doi.org/10.5194/egusphere-egu22-5126, 2022.

EGU22-5150 | Presentations | ITS3.5/NP3.1

The associations between environmental factors and COVID-19: early evidence from China 

Xia Meng, Ye Yao, Weibing Wang, and Haidong Kan

The coronavirus (COVID-19) epidemic, first reported in December 2019 in Wuhan, China, has become one of the most important public health issues worldwide. Previous studies have shown the importance of weather variables and air pollution in the transmission or prognosis of infectious diseases, including, but not limited to, influenza and severe acute respiratory syndrome (SARS). In the early stage of the COVID-19 epidemic, there was intense debate and there were inconsistent results on whether environmental factors were associated with the spread and prognosis of COVID-19. Therefore, our team conducted a series of studies to explore the associations between atmospheric parameters (temperature, humidity, UV radiation, particulate matter and nitrogen dioxide) and COVID-19 (transmission ability and prognosis) in the early stage of the epidemic, using data from early 2020 in China and worldwide. Our results showed that meteorological conditions (temperature, humidity and UV radiation) had no significant associations with the cumulative incidence rate or R0 of COVID-19, based on data from 224 Chinese cities or on data from 202 locations in 8 countries before March 9, 2020, suggesting that the transmissibility of COVID-19 in the general population would not change significantly with increasing temperature or UV radiation or with changes in humidity. Moreover, we found that particulate matter pollution was significantly associated with the case fatality rate (CFR) of COVID-19 in 49 Chinese cities, based on data before April 12, 2020, indicating that air pollution might worsen the prognosis of COVID-19. Our studies provide an environmental perspective for the prevention and treatment of COVID-19.

How to cite: Meng, X., Yao, Y., Wang, W., and Kan, H.: The associations between environmental factors and COVID-19: early evidence from China, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5150, https://doi.org/10.5194/egusphere-egu22-5150, 2022.

EGU22-9213 | Presentations | ITS3.5/NP3.1

The Effects of COVID-19 Lockdown on Air Quality and Health in India and Finland 

Shubham Sharma, Behzad Heibati, Jagriti Suneja, and Sri Harsha Kota

The COVID-19 lockdowns worldwide provided an opportunity to evaluate the impacts of restricted movement and reduced emissions on air quality. In this study, we analyze data obtained from ground-based observation stations for six air pollutants (PM10, PM2.5, CO, NO2, O3 and SO2) and meteorological parameters from March 25th to May 31st in 22 cities representative of five regions of India, and from March 16th to May 14th in 21 districts of Finland, for the years 2017 to 2020. The NO2 concentrations dropped significantly during all phases, with the exception of East India during Phase 1. O3 concentrations for all four phases in West India reduced significantly, with the largest reduction during Phase 2 (~38%). The PM2.5 concentration nearly halved across India during all phases except in South India, where a very marginal reduction (2%) was observed during Phase 4. SO2 (~31%) and CO (~41%) concentrations also reduced noticeably in South India and North India during all the phases. The air temperature rose by ~10% (on average) during all the phases across India when compared to 2017-2019. In Finland, NO2 concentrations reduced substantially in 2020. Apart from Phase 1, the concentrations of PM10 and PM2.5 reduced markedly in all phases across Finland. O3 and SO2 concentrations stayed within the permissible limits during the study period for all four years but were highest in 2017 in Finland, while the levels of sulfurous compounds (OSCs) increased during all the phases across Finland. Changes in mobility patterns were also assessed and were observed to have decreased significantly during the lockdown. The overall mortality benefits due to the reduction in PM2.5 concentrations have also been estimated for India and Finland. This research therefore illustrates the effectiveness of the lockdown and provides timely policy suggestions to regulators to implement interventions to improve air quality.
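
The percentage changes reported above follow from comparing phase-mean concentrations in 2020 with the mean of the same calendar window in 2017-2019. A minimal pandas sketch of that comparison is shown below; the file layout and column names are hypothetical.

```python
import pandas as pd

# Hypothetical tidy table: one row per city and day, e.g. columns: city, date, pm25, no2, ...
df = pd.read_csv("daily_pollutants.csv", parse_dates=["date"])

def lockdown_change(df, pollutant, start="03-25", end="05-31"):
    """Mean 2020 concentration in the lockdown window vs the 2017-2019 baseline, per city (%)."""
    window = df[df["date"].dt.strftime("%m-%d").between(start, end)]
    yearly = (window.assign(year=window["date"].dt.year)
                    .groupby(["city", "year"])[pollutant].mean()
                    .unstack("year"))
    baseline = yearly[[2017, 2018, 2019]].mean(axis=1)
    return 100.0 * (yearly[2020] - baseline) / baseline

print(lockdown_change(df, "pm25").round(1))
```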

How to cite: Sharma, S., Heibati, B., Suneja, J., and Kota, S. H.: The Effects of COVID-19 Lockdown on Air Quality and Health in India and Finland, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9213, https://doi.org/10.5194/egusphere-egu22-9213, 2022.

EGU22-9812 | Presentations | ITS3.5/NP3.1

Changes in Global Urban Air Quality due to Large Scale Disruptions of Activity 

Will Drysdale, Charlotte Stapleton, and James Lee

Since 2020, countries around the world have implemented various interventions in response to a global public health crisis. The interventions included restrictions on mobility, promotion of working from home and the limiting of local and international travel. These, along with other behavioural changes made by people in response to the crisis, affected various sources of air pollution, not least the transport sector. Whilst the method through which these changes were implemented is not something to be repeated, understanding the effects of the changes will help direct policy for further improving air quality.

 

We analysed NOx, O3 and PM2.5 data from many hundreds of air quality monitoring sites in urban areas around the world, and examined 2020 in relation to the previous 5 years. The data were examined alongside mobility metrics to contextualise the magnitude of the changes and were viewed through the lens of World Health Organisation guidelines as a metric linking air quality changes with human health. Interestingly, reductions in polluting activities did not lead to wholesale improvements in air quality by all metrics, due to the more complex processes involved in tropospheric O3 production.

 

How to cite: Drysdale, W., Stapleton, C., and Lee, J.: Changes in Global Urban Air Quality due to Large Scale Disruptions of Activity, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9812, https://doi.org/10.5194/egusphere-egu22-9812, 2022.

EGU22-11475 | Presentations | ITS3.5/NP3.1

Scaling Dynamics of Growth Phenomena: from Epidemics to the Resilience of Urban Systems 

Ioulia Tchiguirinskaia and Daniel Schertzer

Defining optimal COVID-19 mitigation strategies remains at the top of public health agendas around the world. It requires a better understanding and refined modeling of the intrinsic dynamics of the epidemic. The common root of most models of epidemics is a cascade paradigm that dates back to their emergence with Bernoulli and d’Alembert, which predated Richardson’s famous quatrain on the cascade of atmospheric dynamics. However, unlike in other cascade processes, the characteristic times of a cascade of contacts that spread infection, and the corresponding rates, are believed to be independent of the cascade level. This assumption precludes cascades with scaling contamination.

In this presentation, we theoretically argue and empirically demonstrate that the intrinsic dynamics of the COVID-19 epidemic during the phases of growth and decline is a cascade with a rather universal scaling, the statistics of which differ significantly from those of an exponential process. This result first confirms the possibility of a higher prevalence of the intrinsic dynamics, resulting in slower but potentially longer phases of growth and decline. It also shows that a fairly simple transformation connects the two phases. It thus explains the frequent deviations of epidemic models aligned with exponential growth, and it makes it possible to distinguish an epidemic decline from a change of scaling in the observed growth rates. The resulting variability across spatiotemporal scales is a major feature that requires alternative approaches, with practical consequences for data analysis and modelling. We illustrate some of these consequences using the now famous database from the Johns Hopkins University Center for Systems Science and Engineering.

Due to the significant increase over time of available data, we are no longer limited to deterministic calculus. The non-negligible fluctuations with respect to a power law can be easily explained within the framework of stochastic multiplicative cascades. These processes are exponentials of a stochastic generator Γ(t), whose stochastic differentiation remains quite close to the deterministic one, basically adding a supplementary term σdt to the differential of the generator. When the generator Γ(t) is Gaussian, σ is the “quadratic variation”. Extensions to Lévy stable generators, which are strongly non-Gaussian, have also been considered. To study the stochastic nature of the cascade generator, as well as how it respects the above-mentioned symmetry between the phases of growth and decline, we use universal multifractals. They provide the appropriate framework for the joint scaling analysis of vector-valued time series and for introducing location and other dependencies. This corresponds to enlarging the domain on which the process and its generator are defined, as well as their co-domain, on which they are valued. These clarifications should make it possible to improve epidemic models and their statistical analysis.
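
A minimal numerical sketch of such a process with a Gaussian generator is given below: each ensemble member is the exponential of a Brownian-like generator with drift, so the ensemble exhibits multiplicative (lognormal) fluctuations around the deterministic growth. All parameter values are illustrative and unrelated to the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)
n_runs, n_days, dt = 500, 120, 1.0
mu, sigma = 0.03, 0.08            # drift and 'quadratic variation' of the generator (assumed)

# generator Gamma(t): Brownian motion with drift; the process is its exponential
dGamma = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_runs, n_days))
Gamma = np.cumsum(dGamma, axis=1)
counts = 100.0 * np.exp(Gamma)    # hypothetical epidemic counts, starting from 100

# multiplicative fluctuations broaden the ensemble strongly with time
print("day 120 percentiles:", np.percentile(counts[:, -1], [5, 50, 95]).round(0))
```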

More fundamentally, this study points to a new class of stochastic multiplicative cascade models of epidemics in space and time, therefore not limited to compartments. By their generality, these results pave the way for a renewed approach to epidemics, and more generally to growth phenomena, towards more resilient development and management of our urban systems.

How to cite: Tchiguirinskaia, I. and Schertzer, D.: Scaling Dynamics of Growth Phenomena: from Epidemics to the Resilience of Urban Systems, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11475, https://doi.org/10.5194/egusphere-egu22-11475, 2022.

EGU22-11584 | Presentations | ITS3.5/NP3.1

Geophysicists facing Covid-19 

Daniel Schertzer, Vijay Dimri, and Klaus Fraedrich

There has been a series of sessions on the generic theme of “Covid-19 and Geosciences” on the occasion of the AGU, AOGS and EGU conferences since 2020, including during the first lockdown, which required a very fast adaptation to unprecedented health measures. We think it is interesting and useful to have an overview of these sessions and to try to capture what the lessons to learn could be.

To our knowledge, the very first such session was the Great e-Debate “Epidemics, Urban Systems and Geosciences” (https://hmco.enpc.fr/news-and-events/great-e-debate-epidemics-urban-systems-and-geosciences-invitations-and-replays/). It was organised virtually with the help of the UNESCO UniTwin CS-DC (Complex Systems Digital Campus), thanks to its expertise in organising e-conferences long before the pandemic and the first health measures. This would not have been possible without the strong personal involvement of its chair, Paul Bourgine. It was held on Monday 4th May on the occasion of the 2020 EGU conference, which became virtual under the title “EGU2020: Sharing Geoscience Online” (4-8 May 2020). The Great e-Debate did not succeed in being granted the status of an official session of this conference, despite the fact that the technology it used (Blue Button) was much more advanced. Nevertheless, it was clearly an extension of the EGU session ITS2.10 / NP3.3: “Urban Geoscience Complexity: Transdisciplinarity for the Urban Transition”.

Thanks to a later date (7-11 December 2020) and the existence of a GeoHealth section of the AGU, the organisation of several regular sessions for the 2020 Fall Meeting was easier. For EGU 2021 (19-30 April 2021), a sub-part of the inter- and transdisciplinary sessions ITS1 “Geosciences and health during the Covid pandemic”, a Union Session US “Post-Covid Geosciences” and a Townhall meeting TM10 “Covid-19 and other epidemics: engagement of the geoscience communities” were organised. A summary of the special session SS02 “Covid-19 and Geoscience” of the (virtual) 18th Annual Meeting of AOGS (1-6 August 2021) is included in the proceedings of that conference (in press).

We will review the materials generated by these sessions, which show a shift from a focus on the broad range of scientific responses to the pandemic, to which geoscientists could contribute with their specific expertise (from data collection to theoretical modelling), towards an expression of concerns about the broad impacts on the geophysical communities, which appear to be increasingly long-term and constitute a major transformation of community functioning (e.g., again, data collection and knowledge transfer).

How to cite: Schertzer, D., Dimri, V., and Fraedrich, K.: Geophysicists facing Covid-19, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11584, https://doi.org/10.5194/egusphere-egu22-11584, 2022.

EGU22-11747 | Presentations | ITS3.5/NP3.1

To act or not to act. Predictability of intervention and non-intervention in health and environment 

Michalis Chiotinis, Panayiotis Dimitriadis, Theano Illiopoulou, Nikos Mamassis, and Demetris Koutsoyiannis

The COVID-19 pandemic has brought forth the question of the need for draconian interventions before concrete evidence for their need and efficacy is presented. Such interventions could be critical if they are necessary for avoiding threats, or a threat in themselves if the harms they cause are significant.

The interdisciplinary nature of such issues, as well as the unpredictability of various local responses given their potential for global impact, further complicates the question.

The study aims to review the available evidence and to discuss the problem of weighing the predictability of interventions vis-à-vis their intended results against the limits of knowability regarding complex non-linear systems, and thus the predictability of non-interventionist approaches.

How to cite: Chiotinis, M., Dimitriadis, P., Illiopoulou, T., Mamassis, N., and Koutsoyiannis, D.: To act or not to act. Predictability of intervention and non-intervention in health and environment, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11747, https://doi.org/10.5194/egusphere-egu22-11747, 2022.

EGU22-12302 | Presentations | ITS3.5/NP3.1

COVID-19 waves: intrinsic and extrinsic spatio-temporal dynamics over Italy 

Tommaso Alberti and Davide Faranda

COVID-19 waves, mostly due to variants, still require timely efforts from governments based on real-time forecasts of the epidemic via dynamical and statistical models. Nevertheless, less attention has been paid to investigating and characterizing the intrinsic and extrinsic spatio-temporal dynamics of the epidemic spread. The large amount of data, both in terms of data points and observables, allows us to perform a detailed characterization of the epidemic waves and of their relation to different sources, such as testing capabilities, vaccination policies, and restriction measures.

Taking the epidemic evolution of COVID-19 across Italian regions as a case study, we perform a Hilbert-Huang Transform (HHT) analysis to investigate its spatio-temporal dynamics. We identified a similar number of temporal components within all Italian regions, which can be linked to both intrinsic and extrinsic source mechanisms such as the efficiency of restriction measures, testing strategies and performance, and vaccination policies. We also identified mutual scale-dependent relations between different regions, suggesting an additional source mechanism related to the delayed spread of the epidemic due to travel and movements of people. Our results are also helpful for providing long-term extrapolations of epidemic counts by taking into account both the intrinsic and the extrinsic non-linear nature of the underlying dynamics.
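
A minimal sketch of how intrinsic temporal components can be extracted from a regional case-count series via empirical mode decomposition, the first step of the HHT, is given below. It assumes the third-party PyEMD package (installed as EMD-signal) and a synthetic daily series; it is not the authors' processing chain.

```python
import numpy as np
from PyEMD import EMD     # assumes the EMD-signal (PyEMD) package is installed

# hypothetical daily new-case series for one region
t = np.arange(600)
cases = (1000 * np.exp(-((t - 250) / 120.0) ** 2)      # slow epidemic wave
         + 80 * np.sin(2 * np.pi * t / 7)               # weekly reporting cycle
         + 30 * np.random.default_rng(4).standard_normal(t.size))

emd = EMD()
imfs = emd(cases)   # intrinsic mode functions, from fastest to slowest component
print(f"number of intrinsic mode functions: {imfs.shape[0]}")
```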

How to cite: Alberti, T. and Faranda, D.: COVID-19 waves: intrinsic and extrinsic spatio-temporal dynamics over Italy, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12302, https://doi.org/10.5194/egusphere-egu22-12302, 2022.

Black carbon (BC) not only warms the atmosphere but also affects human health. The nationwide lockdown due to the COVID-19 pandemic led to a reduction in human activity unprecedented in the past thirty years. Here, the concentrations of BC in the urban, urban-industrial, suburban, and rural areas of the megacity Hangzhou were monitored using a multi-wavelength Aethalometer to estimate the impact of the COVID-19 lockdown on BC emissions. The citywide BC decreased by 44%, from 2.30 μg/m3 to 1.29 μg/m3, following the COVID-19 lockdown. The source apportionment based on the Aethalometer model shows that the reduction in vehicle emissions accounted for the BC decline in the urban area, while biomass burning in rural areas around the megacity made a regional contribution to BC. We highlight that emission controls on vehicles in urban areas and on biomass burning in rural areas should be more efficient in reducing BC in the megacity Hangzhou.
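
For context, the Aethalometer model mentioned above apportions the measured aerosol absorption between fossil-fuel and biomass-burning sources using two wavelengths and assumed absorption Ångström exponents. The sketch below solves the corresponding two-equation system for one pair of hypothetical absorption coefficients; the values and exponents are placeholder assumptions, not results from the study.

```python
# Two-wavelength source apportionment in the spirit of the Aethalometer model.
b_370, b_880 = 60.0, 20.0          # measured absorption at 370 and 880 nm (Mm^-1), placeholders
AAE_ff, AAE_bb = 1.0, 2.0          # assumed Ångström exponents for fossil fuel and biomass burning

# b_370 = b_ff_880 * (370/880)**(-AAE_ff) + b_bb_880 * (370/880)**(-AAE_bb)
# b_880 = b_ff_880 + b_bb_880
r_ff = (370.0 / 880.0) ** (-AAE_ff)
r_bb = (370.0 / 880.0) ** (-AAE_bb)
b_ff_880 = (b_370 - r_bb * b_880) / (r_ff - r_bb)   # fossil-fuel absorption at 880 nm
b_bb_880 = b_880 - b_ff_880                          # biomass-burning absorption at 880 nm

frac_bb = b_bb_880 / b_880
print(f"Biomass-burning fraction of absorption at 880 nm: {frac_bb:.2f}")
```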

How to cite: Li, W. and Xu, L.: Responses of concentration and sources of black carbon in a megacity during the COVID-19 pandemic, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12907, https://doi.org/10.5194/egusphere-egu22-12907, 2022.

For many of us, the Covid-19 pandemic brought long-time scientific interest in epidemiology to the point of involvement. An important aspect of the evolution of acute respiratory epidemics is their seasonal character. Our toolkit for handling seasonal phenomena in the geosciences has increased in the last dozen years or so with the development and application of concepts and methods from the theory of nonautonomous and random dynamical systems (NDSs and RDSs). In this talk, I will briefly:

  • Introduce some elements of these two closely related theories.

  • Illustrate the two with an application to seasonal effects within a chaotic model of the El Niño–Southern Oscillation (ENSO).

  • Introduce to a geoscientific audience a simple epidemiological "box" model of the Susceptible–Exposed–Infectious–Recovered (SEIR) type.

  • Summarize NDS results for a chaotic SEIR model with seasonal effects.

  • Mention the utility of data assimilation (DA) tools in the parameter identification and prediction of an epidemic’s evolution.

References

- Chekroun, M. D., Ghil, M., and Neelin, J. D. (2018) Pullback attractor crisis in a delay differential ENSO model, in Nonlinear Advances in Geosciences, A. Tsonis (Ed.), Springer, pp. 1–33, doi: 10.1007/978-3-319-58895-7

- Crisan, D., and Ghil, M. (2022) Asymptotic behavior of the forecast–assimilation process with unstable dynamics, Chaos, in preparation

- Faranda, D., Castillo, I. P., Hulme, O., Jezequel, A., Lamb, J. S., Sato, Y., and Thompson, E. L. (2020) Asymptotic estimates of SARS-CoV-2 infection counts and their sensitivity to stochastic perturbation, Chaos, 30(5): 051107, doi: 10.1063/5.0009454

- Ghil, M. (2019) A century of nonlinearity in the geosciences, Earth & Space Science, 6: 1007–1042, doi: 10.1029/2019EA000599

- Kovács, T. (2020) How can contemporary climate research help understand epidemic dynamics? Ensemble approach and snapshot attractors, J. Roy. Soc. Interface, 17(173): 20200648, doi: 10.1098/rsif.2020.0648

How to cite: Ghil, M.: Time-dependent forcing in the geosciences and in epidemiology, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13522, https://doi.org/10.5194/egusphere-egu22-13522, 2022.

Standard epidemic models based on compartmental differential equations are investigated under continuous parameter change as external forcing. We show that seasonal modulation of the contact parameter superimposed upon a monotonic decay needs a different description from that of the standard chaotic dynamics. The concept of snapshot attractors and their natural distribution has been adopted from recent climate change research. This shows the importance of the finite-time chaotic effect and of the ensemble interpretation when investigating the spread of a disease. By defining statistical measures over the ensemble, we can interpret the internal variability of the epidemic as the onset of complex dynamics, even for those values of the contact parameter where regular behaviour would originally be expected. We argue that anomalous outbreaks of the infectious class cannot die out as long as transient chaos is present in the system. Nevertheless, this fact becomes apparent only by using an ensemble approach rather than a single-trajectory representation. These findings are generally applicable to explicitly time-dependent epidemic systems regardless of parameter values and time scales.

How to cite: Kovács, T.: How can contemporary climate research help understand epidemic dynamics? -- Ensemble approach and snapshot attractors, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13534, https://doi.org/10.5194/egusphere-egu22-13534, 2022.

Most of the largest volcanic eruptions in the world occur in remote places such as the deep oceans or poorly monitored oceanic islands. Thus, our capacity to monitor such volcanoes is limited to remote sensing and global geophysical observations. However, the rapid estimation of volcanic eruption parameters is needed for scientific understanding of the eruptive process and for rapid hazard estimation. We first present a method to rapidly identify large volcanic explosions, based on the analysis of seismic data. The method automatically detects and locates long-period (0.01-0.03 Hz) signals associated with physical processes close to the Earth’s surface by analyzing surface waves recorded at global seismic stations. With this methodology, we promptly detect the January 15, 2022 Hunga Tonga eruption, among many other signals associated with known and unknown processes. We further use the waves generated by the Hunga Tonga volcanic explosion to estimate important first-order parameters of the eruption (force spectrum, impulse). We then relate the estimated parameters to the volcanic explosivity index (VEI). Our estimate of VEI~6 indicates that the Hunga Tonga eruption is among the largest volcanic events ever recorded with modern geophysical instrumentation, and can provide new insights into the physics of large volcanoes.

How to cite: Poli, P. and Shapiro, N.: Seismological characterization of dynamics parameter of the Hunga Tonga explosion from teleseismic waves, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13572, https://doi.org/10.5194/egusphere-egu22-13572, 2022.

EGU22-13576 | Presentations | ITS3.6/SM1.2

The 2022 Tonga tsunami in the marginal seas of the northwestern Pacific Ocean 

Elizaveta Tsukanova, Alisa Medvedeva, Igor Medvedev, and Tatiana Ivelskaya

The Hunga Tonga volcanic eruption on 15 January 2022 created a tsunami affecting the entire Pacific Ocean. The observed tsunami was found to have a dual mechanism, being caused both by the wave incoming from the source area and by an atmospheric wave propagating at the speed of sound. The tsunami was clearly recorded in the marginal seas of the northwestern Pacific, including the Sea of Japan, the Sea of Okhotsk and the Bering Sea, in particular on the coasts of Kamchatka, the Kuril Islands and the Aleutian Islands. We examined high-resolution records (1-min sampling) from about 50 tide gauges and 15 air pressure stations in these seas for the period 14-17 January 2022. On the Russian coast, the highest wave, with a trough-to-crest height of 1.4 m, was recorded at Vodopadnaya, on the southeastern Kamchatka Peninsula; on the coasts of the Aleutian Islands the tsunami waves were even higher, up to 2 m. Based on numerical modelling, we estimated the arrival time of the gravitational tsunami waves from the source. We found that the character of the sea level oscillations at most of the stations clearly changed before these waves arrived. A comparative analysis of sea level and atmospheric data indicated that these changes were probably caused by the atmospheric waves generated by the volcanic eruption.
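
A minimal sketch of how a trough-to-crest tsunami height can be extracted from a 1-min tide gauge record is given below: tides and longer oscillations are removed with a high-pass filter and the maximum excursion is taken in the window of interest. File and column names are hypothetical, and the study's processing may differ in detail.

```python
import pandas as pd
from scipy.signal import butter, filtfilt

# Hypothetical 1-min sea level record (metres) with 'time' and 'level' columns
df = pd.read_csv("tide_gauge.csv", parse_dates=["time"]).set_index("time")
fs = 1 / 60.0                                    # sampling frequency in Hz (1-min data)

# high-pass filter with a 3-hour cutoff to remove tides and longer oscillations
b, a = butter(4, (1 / (3 * 3600.0)) / (fs / 2.0), btype="highpass")
df["detided"] = filtfilt(b, a, df["level"].interpolate().values)

event = df["detided"]["2022-01-15":"2022-01-17"]
print(f"max trough-to-crest excursion: {event.max() - event.min():.2f} m")
```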

How to cite: Tsukanova, E., Medvedeva, A., Medvedev, I., and Ivelskaya, T.: The 2022 Tonga tsunami in the marginal seas of the northwestern Pacific Ocean, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13576, https://doi.org/10.5194/egusphere-egu22-13576, 2022.

EGU22-13578 | Presentations | ITS3.6/SM1.2

Global ionospheric signature of the tsunami triggered by the 2022 Hunga Tonga volcanic eruption 

Edhah Munaibari, Lucie Rolland, Anthony Sladen, and Bertrand Delouis

The Hunga Tonga volcanic eruption on Jan. 15, 2022 released a highly energetic atmospheric pressure wave that was observed all around the globe in different types of measurements (e.g., barometers and infrasound sensors, satellite images, ionospheric measurements, etc.). In addition, the eruption triggered a meteo-tsunami followed by a series of tsunami waves. Tide gauges across the Pacific, Atlantic and Indian oceans recorded significant sea-level changes related to the primary eruption.

We focus our presentation on the imprint of the tsunami waves on the ionosphere. We make use of an extensive collection of Global Navigation Satellite Systems (GNSS) data recorded by multi-constellation GNSS receivers across the Pacific Ocean and beyond. The observation of tsunami-induced ionospheric signatures is made possible by the efficient coupling of tsunami waves with the surrounding atmosphere and the generation of internal gravity waves (IGWs). With the help of GNSS systems (BeiDou, GPS, Galileo, GLONASS, QZSS), ionospheric disturbances can be monitored and observed by utilizing the Total Electron Content (TEC) derived from the delay that the ionosphere imposes on the electromagnetic signals transmitted by the GNSS satellites. We identify and characterize the ionospheric TEC signatures following the passage of the Tonga tsunami. We investigate the influence of known key ambient parameters, such as the local geomagnetic field, the tsunami propagation direction, and the distance to the tsunami source, on the amplitude of the observed signatures. Moreover, we correlate the detected tsunami-induced TEC signatures with sea level measurements to assess their tsunami origin. We also contrast the identified TEC signatures in the Pacific Ocean with their analogs induced by the tsunami triggered by the Mar. 4, 2021 8.1 Mw Kermadec Islands earthquake. Both events took place in roughly the same geographical region, with the Kermadec tsunami being less complex (no meteo-tsunami, shorter duration, and about one order of magnitude smaller in amplitude). Finally, we provide estimations of the tsunami amplitude at the ocean surface in the areas crossed by GNSS radio signals, some of which are not covered by open-ocean sea-level sensors (DART buoys).
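
For background, slant TEC follows from the dispersive (geometry-free) combination of the two GNSS frequencies; a minimal sketch for GPS pseudoranges is shown below. This is the textbook first step only, not the authors' full processing chain (which typically relies on carrier phases, bias estimation and filtering), and the input values are hypothetical.

```python
# Geometry-free (dispersive) combination of dual-frequency GPS pseudoranges.
F1, F2 = 1575.42e6, 1227.60e6      # GPS L1 and L2 carrier frequencies (Hz)

def slant_tec(p1_m, p2_m):
    """Slant TEC in TEC units (1 TECU = 1e16 electrons/m^2) from pseudoranges in metres."""
    k = F1**2 * F2**2 / (40.3 * (F1**2 - F2**2))
    return k * (p2_m - p1_m) / 1e16

# hypothetical pseudoranges differing by ~2.5 m of ionospheric delay difference
print(f"{slant_tec(22_000_100.0, 22_000_102.5):.1f} TECU")
```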

How to cite: Munaibari, E., Rolland, L., Sladen, A., and Delouis, B.: Global ionospheric signature of the tsunami triggered by the 2022 Hunga Tonga volcanic eruption, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13578, https://doi.org/10.5194/egusphere-egu22-13578, 2022.

EGU22-13579 | Presentations | ITS3.6/SM1.2

Modeling low-frequency Rayleigh waves excited by the Jan. 15, 2022 eruption of Hunga Tonga-Hunga Ha’apai volcano 

Shenjian Zhang, Rongjiang Wang, and Torsten Dahm

Low-frequency seismic energy whose spectrum is centered at certain narrow bands has been detected after violent volcanic eruptions. Normal-mode analysis relates this signal to resonances between the atmosphere and the solid Earth. After the powerful eruption of the Hunga Tonga-Hunga Ha’apai volcano on Jan. 15, 2022, this low-frequency signal was found on long-period and very-long-period seismometers worldwide. The amplitude spectrum of the signal for this eruption consists of three clear peaks located at 3.72, 4.61 and 6.07 mHz, instead of the two distinct bands seen in previous cases. The spectrogram analysis shows that this low-frequency energy lasts for several hours and is independent of the air-wave arrival, while the cross-correlation result confirms that the signal travels as Rayleigh waves with a speed of 3.68 km/s. In this study, we summarize our findings on the observations and show synthetic waveforms to provide a possible explanation for the source of this signal. We suggest that the atmospheric oscillations near the volcano excited by the eruption act as an enduring external force on the surface of the solid Earth and produce Rayleigh waves propagating all over the world.
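
A minimal sketch of the cross-correlation idea, estimating an apparent propagation speed from the lag of maximum correlation between two stations, is given below; the waveforms, inter-station distance and lag are synthetic placeholders rather than data from the study.

```python
import numpy as np

# Synthetic illustration: estimate an apparent propagation speed from the lag of
# maximum cross-correlation between two vertical-component records (1 Hz sampling).
fs = 1.0                          # samples per second
dist_km = 2500.0                  # assumed inter-station distance difference
rng = np.random.default_rng(5)
x = rng.standard_normal(36000)    # placeholder waveform at station A
y = np.roll(x, 680)               # station B: same signal delayed by 680 s (placeholder)

cc = np.correlate(y, x, mode="full")
lags = np.arange(-len(x) + 1, len(x))
lag_s = lags[np.argmax(cc)] / fs
print(f"apparent speed: {dist_km / lag_s:.2f} km/s")   # ~3.68 km/s for these numbers
```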

How to cite: Zhang, S., Wang, R., and Dahm, T.: Modeling low-frequency Rayleigh waves excited by the Jan. 15, 2022 eruption of Hunga Tonga-Hunga Ha’apai volcano, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13579, https://doi.org/10.5194/egusphere-egu22-13579, 2022.

The population and built infrastructure of the Kingdom of Tonga are highly exposed to ocean- and climate-related coastal hazards. The archipelago was impacted on January 15, 2022, by a destructive tsunami caused by the Hunga Tonga-Hunga Ha'apai submarine volcanic eruption. Weeks later, several islands were still cut off from the world, a situation made worse by COVID-19-related international lockdowns and the lack of a precise picture of the magnitude and pattern of destruction. Like most Pacific islands, the Kingdom of Tonga lacks an accurate population and infrastructure database. The occurrence of events such as this in remote island communities highlights the need for (1) precisely knowing the distribution of residential and public buildings, (2) evaluating what proportion of those would be vulnerable to a tsunami under various run-up scenarios, and (3) providing tools to the local authorities for elaborating efficient evacuation plans and securing essential services outside the hazard zones. Using a GIS-based dasymetric mapping method, previously tested in New Caledonia, for assessing, calibrating, and mapping population distribution at high resolution, we produce maps that combine population clusters, critical elevation contours, and the precise location of essential services (hospitals, airports, shopping centers, etc.), backed up by before–after imagery accessible online. Results show that 62% of the population on the main island of Tonga lives in well-defined clusters between sea level and the 15 m elevation contour, which is also the value of the maximum tsunami run-up reported on this occasion. The patterns of vulnerability thus obtained for each island in the archipelago are further compared to the destruction patterns recorded after the earthquake-related 2009 tsunami in Tonga, thereby also allowing us to rank exposure and the potential for cumulative damage as a function of tsunami cause and source area. By relying on low-cost tools and incomplete datasets for rapid implementation in the context of natural disasters, this approach can assist in (1) guiding emergency rescue targets and (2) elaborating future land-use planning priorities for disaster risk-reduction purposes. By involving an interactive mapping tool to be shared with the resident population, the approach aims to enhance disaster preparedness and resilience. It works for all types of natural hazards and is easily transferable to other insular settings.

How to cite: Thomas, B. E. O., Roger, J., and Gunnell, Y.: A rapid, low-cost, high-resolution, map-based assessment of the January 15, 2022 tsunami impact on population and buildings in the Kingdom of Tonga, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13580, https://doi.org/10.5194/egusphere-egu22-13580, 2022.

The phreatic eruption of Hunga-Tonga on January 15, 2022 was so energetic that it excited globe-circling air waves. These wave packets, with a dominant period of 30 minutes, have been observed in single barograms even after completing at least four orbits, i.e. 6 days after the eruption. Constructive and destructive interference between waves that have left the source region in opposite directions leads to the emergence of standing pressure waves: normal modes of the atmosphere.

 

We report on individual modes of spherical harmonic degree between 30 and 80, covering the frequency band from 0.2 mHz to 0.8 mHz. These modes belong to the Lamb-wave-equivalent modes with a phase velocity of 313 m/s. They are trapped at the Earth's surface, decay exponentially with altitude, and their particle motion is longitudinal and horizontal. The restoring force is dominated by incompressibility.
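
As a rough consistency check of the quoted numbers (and not necessarily the relation used by the author), the frequency of a surface-guided mode of angular degree l travelling with phase velocity c on a sphere of radius a can be approximated by f ≈ c·sqrt(l(l+1))/(2πa):

```python
# Rough consistency check, assuming the standard surface-guided-wave relation
#     f ~ c * sqrt(l * (l + 1)) / (2 * pi * a)
import math

c = 313.0        # m/s, Lamb-wave phase velocity quoted above
a = 6.371e6      # m, Earth radius

for l in (30, 80):
    f_mHz = c * math.sqrt(l * (l + 1)) / (2 * math.pi * a) * 1e3
    print(f"degree l = {l}: f ~ {f_mHz:.2f} mHz")
# Gives roughly 0.24 mHz and 0.63 mHz, of the same order as the 0.2-0.8 mHz band above.
```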

 

In the frequency band where we observe these modes, the mode branches do not cross the mode branches of the solid Earth. Hence we do not expect any significant coupling with seismic normal modes of the solid Earth. Such a crossing occurs at 3.7 mHz and above.

 

How to cite: Widmer-Schnidrig, R.: Observation of acoustic normal modes of the atmosphere after the 2022 Hunga-Tonga eruption., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13581, https://doi.org/10.5194/egusphere-egu22-13581, 2022.

The explosive eruption of the Hunga Tonga-Hunga Ha’apai volcano on 15 January 2022 impacted the Earth, its oceans and atmosphere on a global scale. Witnesses reported an audible “bang” at distances of up to several thousand kilometers from the event. With infrasound sensors, this sound wave can be detected even where the frequency content or the amplitude of the signal renders it inaudible to the human ear. Infrasound sensors are distributed globally, and a selection of these stations upload their data in real time to publicly available servers. In combination with open-source libraries such as ObsPy or SciPy, these data sources make it possible to observe the atmospheric disturbances caused by the eruption on a global scale in near real time. With a minimum of data processing, not only the first arrival of the atmospheric Lamb wave can be identified at most stations, but also further passes of the wave as it propagates around the planet several times. Having large amounts of publicly available data is crucial in that process. New data chunks can be analyzed and displayed immediately while the signal is still ongoing, because formal data access requests are not required. Additionally, immediate access to a large dataset allows for big-data analysis, reduces the need to consider outliers at individual stations, and increases the chance of identifying the signal after multiple days, when overall signal-to-noise ratios have decreased.
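
As an illustration of this kind of workflow, the following minimal ObsPy example fetches openly available long-period pressure data and bandpass-filters it to bring out the Lamb-wave passages; the network, station and channel codes are placeholders rather than the stations actually used by the authors.

```python
# Minimal sketch: fetch openly available data with ObsPy's FDSN client and look
# at the long-period pressure signal. Station/channel codes are placeholders.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
eruption = UTCDateTime("2022-01-15T04:15:00")

# Request two days of data so more than one Lamb-wave passage can be seen.
st = client.get_waveforms(network="IU", station="ANMO", location="*",
                          channel="LDO",  # long-period pressure channel (placeholder)
                          starttime=eruption, endtime=eruption + 2 * 86400)

st.merge(fill_value="interpolate")
st.detrend("demean")
st.filter("bandpass", freqmin=0.0002, freqmax=0.01)  # ~100 s to ~80 min periods
st.plot()
```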

How to cite: Eckel, F., Garcés, M., and Colet, M.: The 15 January 2022 Hunga Tonga event: Using Open Source to observe a volcanic eruption on a global scale in near real time, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13582, https://doi.org/10.5194/egusphere-egu22-13582, 2022.

EGU22-13583 | Presentations | ITS3.6/SM1.2 | Highlight

Satellite observations and modeling of the 2022 Hunga Tonga-Hunga Ha'apai eruption 

Simon Carn, Benjamin Andrews, Valentina Aquila, Christina Cauley, Peter Colarco, Josef Dufek, Tobias Fischer, Lexi Kenis, Nickolay Krotkov, Can Li, Larry Mastin, Paul Newman, and Paul Wallace

The 15 January 2022 eruption of the submarine Hunga Tonga-Hunga Ha'apai (HTHH) volcano (Tonga) ranks among the largest volcanic explosions of the satellite remote sensing era, and perhaps the last century. It shares many characteristics with the 1883 Krakatau eruption (Indonesia), including atmospheric pressure waves and tsunamis, and the phreatomagmatic interaction of magma and seawater likely played a major role in the dynamics of both events. A portion of the HTHH eruption column rose to lower mesospheric altitudes (~55 km) and the umbrella cloud extent (~500 km diameter at ~30-35 km altitude) rivalled that of the 1991 Pinatubo eruption, indicative of very high mass eruption rates. However, sulfur dioxide (SO2) emissions measured in the HTHH volcanic cloud (~0.4 Tg) were significantly lower than the post-Pinatubo SO2 loading (~10–15 Tg SO2), and on this basis we would expect minimal climate impacts from the HTHH event. Yet, in the aftermath of the eruption satellite observations show a persistent stratospheric aerosol layer with the characteristics of sulfate aerosol, along with a large stratospheric water vapor anomaly. At the time of writing, the origin, composition and eventual impacts of this stratospheric gas and aerosol veil are unclear. We present the preliminary results of a multi-disciplinary approach to understanding the HTHH eruption, including 1D- and 3D-modeling of the eruption column coupled to a 3D atmospheric general circulation model (NASA’s GEOS-5 model), volatile mass balance considerations involving potential magmatic, seawater and atmospheric volatile and aerosol sources, and an extensive suite of satellite observations. Analysis of the HTHH eruption will provide new insight into the dynamics and atmospheric impacts of large, shallow submarine eruptions. Such eruptions have likely occurred throughout Earth’s history but have never been observed with modern instrumentation.

How to cite: Carn, S., Andrews, B., Aquila, V., Cauley, C., Colarco, P., Dufek, J., Fischer, T., Kenis, L., Krotkov, N., Li, C., Mastin, L., Newman, P., and Wallace, P.: Satellite observations and modeling of the 2022 Hunga Tonga-Hunga Ha'apai eruption, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13583, https://doi.org/10.5194/egusphere-egu22-13583, 2022.

EGU22-13584 | Presentations | ITS3.6/SM1.2 | Highlight

The 15 January 2022 Hunga eruption, Tonga – first petrographic and geochemical results 

Shane Cronin, Marco Brenna, Taaniela Kula, Ingrid Ukstins, David Adams, Jie Wu, Joa Paredes Marino, Geoff Kilgour, Graham Leonard, James White, Simon Barker, and Darren Gravley

The phreatoplinian eruption of the shallow submarine Hunga Volcano, Tonga, formed global air-pressure waves, regional tsunami and an up to 55 km-high eruption column. Despite its large explosive magnitude, the magma erupted was similar to past compositions, comprising crystal-poor (<8 wt% total; plag>cpx>opx) andesite with ~57-63 wt% silica glass. Low-magnitude Surtseyan eruptions in 2009-2015 formed from small pockets of andesite that ascended slowly, resulting in high microphenocryst and microlite contents. Large eruptions, including events in ~AD200 and AD1100 and the 2022 event, drew magma rapidly from a ~5-7 km deep mid-crustal reservoir. Rapid decompression and quenching (augmented by magma-water interaction) records the heterogeneity of the reservoir, with mingled glass textures and cryptic mixing of subtly different melts. The 2022 feldspar phenocrysts show melt inclusion compositions more mafic than the host glass, clear uniform cores and thin rims, evidencing ~1 month-long changes caused by decompression, rise and internal mingling of subtly different melts. Cpx phenocrysts show uniform cores, a variety of melt inclusions that are more mafic than or similar to the bulk glass, and thin overgrowth rims reflecting only decompression and mingling. Lithic fragments (<8 wt%) include common hydrothermal minerals (sulphides, quartz, etc.). Without evidence of a mafic trigger or crystallisation-induced overpressure, this extremely violent eruption was triggered by top-down processes that led to rapid exhumation/decompression of magma and very efficient explosive magma-water interaction. These could include any, or all, of: flank collapse; hydrothermal seal fracturing and ingress of water into the upper magma system; and caldera collapse. Subsequent earthquakes suggest that the crustal magma system was rapidly recharged in the days following the eruption.

How to cite: Cronin, S., Brenna, M., Kula, T., Ukstins, I., Adams, D., Wu, J., Paredes Marino, J., Kilgour, G., Leonard, G., White, J., Barker, S., and Gravley, D.: The 15 January 2022 Hunga eruption, Tonga – first petrographic and geochemical results, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13584, https://doi.org/10.5194/egusphere-egu22-13584, 2022.

EGU22-13585 | Presentations | ITS3.6/SM1.2

Hunga-Tonga-Hunga-Ha’apai Jan 15, 2022 eruption: Assembly of heterogeneous magma sources recorded in melt inclusions from plagioclase, clinopyroxene and orthopyroxene. 

Ingrid Ukstins, Shane Cronin, David Adams, Jie Wu, Joali Paredes Marino, Marco Brenna, Ian Smith, and Isabelle Brooks-Clarke

The 15 Jan 2022 eruption of Hunga-Tonga-Hunga-Ha’apai was the largest explosive volcanic event in the last 30 years. These islands represent the subaerially exposed summit of the Hunga Volcano, merged into a single land mass during the most recent eruption in 2014-2015. The 2022 eruption likely represents a 1-in-1000-year event for the Hunga Volcano, with the previous large-magnitude eruption occurring in ~1100 CE during a series of caldera-forming events. The 2022 erupted magma is plagioclase-, orthopyroxene- and clinopyroxene-bearing basaltic andesite to andesite dominated by blocky, poorly vesicular glassy ash with lesser amounts of vesicular pumiceous ash and fine lapilli. Melt Inclusions (MIs) hosted in plagioclase, clinopyroxene and orthopyroxene are abundant and glassy, some displaying shrinkage bubbles, with no evidence of secondary crystallization along the walls or within the MI glass. The groundmass glass and MI in the three main phenocryst phases were analysed for major, trace and volatile element concentrations to enable identification of magmatic sources and to better constrain processes happening at depth. Preliminary data indicate that plagioclase phenocrysts range from An93 to An78, and MI range from 54.1 to 58.7 wt % SiO2, with MgO from 2.5 to 5.3 wt %. Clinopyroxene phenocrysts range from En42 to En50, and MI range from 51.6 to 65.1 wt % SiO2, with MgO from 1.1 to 5.7 wt %. Orthopyroxene phenocrysts range from En68 to En77, and MI range from 55.7 to 59.6 wt % SiO2, with MgO from 2.5 to 5.3 wt %. Clinopyroxene MI span the full range of SiO2 compositions observed from the Hunga Volcano, from the host 2022 event (SiO2: ~57.5 wt %), the 1100 CE event (SiO2: ~60 wt %), the 2014-2015 event (SiO2: ~60.5 wt %), and the most evolved 2009 event (SiO2: ~63 wt %) and extend an additional ~4 wt % SiO2 to more mafic compositions. Orthopyroxene MI most closely resemble the 1100 CE event and the average groundmass glass compositions of the 2022 event. Plagioclase MI overlap the least silicic compositions observed in the 2022 groundmass glass (58.6 wt% SiO2) and extend down to 54 wt % SiO2, overlapping the main field of clinopyroxene MI. Both plagioclase and clinopyroxene MI tend to show higher MgO as compared to the 2022 groundmass glass at the same SiO2 concentration, whereas orthopyroxene shows lower MgO than the groundmass glass. SO3 in MI ranges up to 1600 ppm, significantly higher than the 2022 groundmass glass which averages 200 ppm, with both plagioclase and clinopyroxene MI preserving the highest observed concentrations. In contrast, Cl concentrations in MI extend to 2000 ppm, with the highest values in orthopyroxene and clinopyroxene, and plagioclase MI are lower and generally overlie the main groundmass glass concentrations (~1300 ppm). F was below detection limits. We postulate that clinopyroxene crystals reflect a more primitive basaltic andesite magma, whereas orthopyroxene crystals were likely derived from the magmatic remnants of the 2009 and 2014/2015 events in the upper magma system, and plagioclase crystals were sourced from the full range of magma sources.

How to cite: Ukstins, I., Cronin, S., Adams, D., Wu, J., Paredes Marino, J., Brenna, M., Smith, I., and Brooks-Clarke, I.: Hunga-Tonga-Hunga-Ha’apai Jan 15, 2022 eruption: Assembly of heterogeneous magma sources recorded in melt inclusions from plagioclase, clinopyroxene and orthopyroxene., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13585, https://doi.org/10.5194/egusphere-egu22-13585, 2022.

EGU22-13586 | Presentations | ITS3.6/SM1.2 | Highlight

Post-2015 caldera morphology of the Hunga Tonga-Hunga Ha’apai caldera, Tonga, through drone photogrammetry and summit area bathymetry 

Sönke Stern, Shane Cronin, Marta Ribo, Simon Barker, Marco Brenna, Ian E. M. Smith, Murray Ford, Taaniela Kula, and Rennie Vaiomounga

In December 2014, eruptions began from a submarine vent between the islands of Hunga Tonga and Hunga Ha’apai, 65 km north of Tongatapu, Tonga. The “Hungas” represent small NW and NE remnants of the flanks of a larger edifice, with a ~5 km-diameter collapse caldera south of them. The 2014/15 Surtseyan explosive eruptions lasted for 5 weeks, building a 140 m-high tuff ring.

Deposits on Hunga Ha’apai and tephra fall on Tongatapu record two very large-magnitude eruptions producing local pyroclastic density currents and tephra falls >10 cm thick at distances >65 km away. These likely derive from the central edifice/caldera. The 2022 eruption produced slightly less tephra fall, but was an extremely large explosive event, with regional tsunami indicating substantial topographic change.

Here we report the bathymetric details of the caldera as of November 2015. A multibeam sounder (WASSP) was used to map the shallow (<250 m) seafloor, concentrating on the edges of the Hunga caldera. These results were combined with an aerial survey of the 2015 tuff cone, using a combination of drone photogrammetry and real-time kinematic GPS surveys. The bathymetry reveals that previous historical eruptions, including those of 1988 and 2009, and likely many other recent unrecorded events, produced a series of well-preserved cones around the rim of the caldera. Aside from the raised ground in the northern caldera produced by the 2009 and 2014/15 eruptions, the southern portion is also elevated to within a few metres of sea level, with reefs present. During the 2015 visit, uplifted fresh coral showed that inflation was ongoing and that the caldera was likely in the process of resurgence.

Much of Hunga Tonga and the 2014/2015 cone was destroyed in the 2022 eruptions, with Hunga Ha’apai intact, but dropping vertically by ~10-15 m. The violence of the 2022 eruption was likely augmented by either caldera collapse or flank collapse from the upper edifice, rapidly unroofing the andesitic magma system and enabling efficient water ingress.

These data provide an essential base layer for assessing changes on the ocean floor, especially for determining any caldera or upper-flank changes. Understanding these changes is crucial for forecasting future volcanic hazards at Hunga and other nearby large submarine volcanoes.

How to cite: Stern, S., Cronin, S., Ribo, M., Barker, S., Brenna, M., Smith, I. E. M., Ford, M., Kula, T., and Vaiomounga, R.: Post-2015 caldera morphology of the Hunga Tonga-Hunga Ha’apai caldera, Tonga, through drone photogrammetry and summit area bathymetry, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13586, https://doi.org/10.5194/egusphere-egu22-13586, 2022.

EGU22-13587 | Presentations | ITS3.6/SM1.2

Understanding fragmentation mechanism(s) during the 15 January 2022 Hunga Volcano (Tonga) eruption through particle characteristics 

Joali Paredes-Mariño, James White, Tobias Dürig, Rachel Baxter, Taaniela Kula, Shane Cronin, Ingrid Ukstins, Jie Wu, David Adams, Marco Brenna, and Isabelle Brooks-Clarke

The January 2022 eruption of Hunga Volcano, Tonga is likely the most explosive mafic eruption yet documented. It exhibited dynamics of ash plume expansion and atmospheric pressure waves unlike anything seen before. This is remarkable considering that it erupted crystal-poor and microlite-poor andesitic magma (57-63 wt% silica glass). The climactic phase produced an eruptive column at least 39 km high; however, the ash volume appears anomalously small for the explosive magnitude. Ash from nine different sites across the Kingdom of Tonga was analyzed for textural and morphological properties and grain size distribution. The tephra comprises light pumice (16%), dark pumice (44%), glassy microlite-rich grains (25%), lithics (7%) and free crystals (Pl, Cpx, Opx) (8%). The specific gravity of particles ranges from 0.4 to ~2.5. Secondary electron images show that the pumices have variable vesicularity, ranging from dense, glassy, blocky particles, through glassy particles with isolated vesicles and weakly deformed, thick vesicle walls, to a smaller percentage of microvesicular pumices coated in finer particles. The general characteristics imply rapid decompression, fragmentation and chilling. This implies some form of phreatomagmatism, but one efficient enough to generate such a large blast – e.g., via propagation of stress waves and thermal contraction rapidly increasing the magma surface area available for interaction. The ash is fine-grained and poorly sorted overall. Less than 20 wt.% of ash particles are >1 mm at 80 km SE of the volcano on the main island of Tongatapu, while Nomuka Island, 70 km NE of the volcano, has finer ash, with only 2% of particles >1 mm. It appears that the dispersion axis for the event was directed toward the E or ESE, across the main population centre of Nuku’alofa on Tongatapu. Of the fine fraction, 20 wt.% is <30 micron and 8 wt.% is <10 micron, but there are unusually few particles in the very fine range (<0.05 wt.% finer than 1 micron). Variations in the mode and sorting of ash fall at different locations and angles from the vent show that there was potentially complex dispersal of ash from different phases of the 11-hour-long eruption, and/or different plume heights and fragmentation processes involved. Plume observations suggest at least two different plume levels during the main phases of the eruption, and the fragmentation mechanisms likely varied between the blast-generating phase and the less explosive phases leading up to and following it.

How to cite: Paredes-Mariño, J., White, J., Dürig, T., Baxter, R., Kula, T., Cronin, S., Ukstins, I., Wu, J., Adams, D., Brenna, M., and Brooks-Clarke, I.: Understanding fragmentation mechanism(s) during the 15 January 2022 Hunga Volcano (Tonga) eruption through particle characteristics, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13587, https://doi.org/10.5194/egusphere-egu22-13587, 2022.

EGU22-13588 | Presentations | ITS3.6/SM1.2

The global reach of the 2022 Tonga volcanic eruption 

Jadranka Sepic, Igor Medvedev, Isaac Fine, Richard Thomson, and Alexander Rabinovich

The Tonga volcanic eruption of 15 January 2022 generated tsunami waves that impacted the entire Global Ocean as far away as 18,000 km from the source in the tropical Pacific Ocean. A defining characteristic of the tsunami was the dual forcing mechanism that sent oceanic waves radiating outward from the source at the longwave speed and atmospheric pressure Lamb waves radiating around the globe at the speed of sound (i.e. roughly 1.5 times faster than the longwave phase speed). Based on time series from several hundred high-resolution observational sites, we constructed global maps of the oceanic tsunami waves and the atmospheric Lamb waves. In some areas of the Pacific Ocean, we were able to distinguish between the two types of motions and estimate their relative contribution. A global numerical model of tsunami waves was constructed and results from the model compared with the observations. The modeled and observed tsunami wave heights were in good agreement. The global maps also enabled us to identify regional “hot spots” where the tsunami heights were highest. In addition to areas in the Pacific Ocean (Chile, New Zealand, Japan, the U.S. West Coast, and the Alaska/Aleutian Islands), “hot regions” included the Western Mediterranean and the Atlantic coasts of Europe and northern Africa.
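
The "roughly 1.5 times faster" figure follows from comparing the shallow-water long-wave speed sqrt(g·h) for a typical open-ocean depth with the ~315 m/s Lamb-wave speed; the sketch below uses an assumed 4 km depth and an assumed Lamb-wave speed, not values from the study.

```python
# Back-of-the-envelope check of the "roughly 1.5 times faster" statement above.
# The 4 km depth and 315 m/s Lamb speed are assumed typical values.
import math

g = 9.81                       # m/s^2
h = 4000.0                     # m, typical open-ocean depth (assumption)
c_tsunami = math.sqrt(g * h)   # shallow-water (long-wave) phase speed, ~198 m/s
c_lamb = 315.0                 # m/s, approximate Lamb-wave speed (assumption)

print(f"tsunami long-wave speed ~ {c_tsunami:.0f} m/s")
print(f"Lamb wave / long wave   ~ {c_lamb / c_tsunami:.2f}")   # ~1.6
```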

How to cite: Sepic, J., Medvedev, I., Fine, I., Thomson, R., and Rabinovich, A.: The global reach of the 2022 Tonga volcanic eruption, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13588, https://doi.org/10.5194/egusphere-egu22-13588, 2022.

EGU22-13589 | Presentations | ITS3.6/SM1.2 | Highlight

Numerical investigations on different possible generating mechanisms for the tsunami following the January 15 2022 Hunga Tonga-Hunga Ha’apai eruption 

Alberto Armigliato, Cesare Angeli, Glauco Gallotti, Stefano Tinti, Martina Zanetti, and Filippo Zaniboni

The Hunga Tonga-Hunga Ha’apai eruption of January 15 2022 was the culminating event of a sequence of seismic and volcanic events starting back in December 2021. The January 15 eruption manifested itself above sea level with a number of phenomena, including the generation of a convective column ascending well into the stratosphere, pyroclastic flows travelling over the sea surface, an atmospheric pressure wave recorded by several instruments around the globe, and a tsunami, which is the main focus of this study.

The tsunami that followed the eruption was observed both in the near-field and in the far-field, propagating across the entire Pacific Ocean and causing damage and loss of life as far away as Peru. In the near-field (Tonga archipelago), it is harder to distinguish the damage induced by the eruption itself from that caused by the tsunami waves.

It is still not clear what the main generating mechanism for the ensuing tsunami was. In this contribution, several different hypotheses are investigated, adopting simplified models ranging from the submerged volcanic edifice collapse to the phreatomagmatic explosion and to the atmospheric pressure wave that was recorded across the entire globe. The propagation of the tsunami is simulated numerically with both non-dispersive and dispersive codes. Different spatial scales and resolutions are adopted to check the relative weight of the different generating mechanisms in the near- and in the far-field. Tentative conclusions are drawn by comparing the simulated results with the available experimental data in terms of tide-gauge records and near-field coastal impact.

How to cite: Armigliato, A., Angeli, C., Gallotti, G., Tinti, S., Zanetti, M., and Zaniboni, F.: Numerical investigations on different possible generating mechanisms for the tsunami following the January 15 2022 Hunga Tonga-Hunga Ha’apai eruption, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13589, https://doi.org/10.5194/egusphere-egu22-13589, 2022.

EGU22-13590 | Presentations | ITS3.6/SM1.2 | Highlight

Caldera subsidence during the Hunga-Tonga explosive eruption? 

Thomas R. Walter and Simone Cesca and the GFZ-DLR-Geomar Task Force Team

The Hunga-Tonga eruption culminated on January 15, 2022, with a high-intensity Plinian eruption exceeding 20 km height, tsunamis affecting local islands and the circum-Pacific region, locally air-coupled seismic surface waves recorded at teleseismic distances, and explosive shock waves that repeatedly travelled around the world. Hunga-Tonga is a flat-topped volcano that rises about 1700 m above the seafloor, hosting a submarine 3-4 km diameter caldera floor that lies at less than 200 m water depth and is surrounded by an elevated, approx. 100-200 m high caldera wall. Only small parts of the volcano rise above sea level along the caldera wall, such as the islands Hunga Tonga and Hunga Ha'apai in the north and small unnamed rocks in the south. Satellite imagery acquired by Pleiades and Sentinel 1A suggests that during the January 15, 2022 eruption, the central part of Hunga Tonga Hunga Ha'apai as well as the small rocks in the south disappeared. By analysing satellite radar and imagery, we constrain island perimeters and morphologies before and after the eruption, to find evidence for island subsidence and erosion. In addition, seismic data recorded during the January 15, 2022 eruption were analysed in the time and frequency domains, revealing high-amplitude activity over ~1 hr. The comparison of seismic, GNSS and local tsunami recordings gives insights into the time succession of the eruption. For instance, moment tensor inversion suggests that the largest-amplitude seismic signal was produced by a dominant tensile non-double-couple component, characteristic of volcanic explosions. Furthermore, we also found evidence for reverse-polarity mechanisms, in agreement with subsidence of a caldera, possibly indicating incremental activity of a ring fault. We discuss the possible contribution of a caldera to the evolving eruption dynamics and the need to improve geophysical monitoring of this island arc in general and to acquire high-resolution submarine data at Hunga Tonga Hunga Ha'apai in particular.

How to cite: Walter, T. R. and Cesca, S. and the GFZ-DLR-Geomar Task Force Team: Caldera subsidence during the Hunga-Tonga explosive eruption?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13590, https://doi.org/10.5194/egusphere-egu22-13590, 2022.

EGU22-13591 | Presentations | ITS3.6/SM1.2

Volcanogenic tsunami on January 15, 2022: insights from deep-ocean measurements 

Mikhail Nosov, Kirill Sementsov, Sergey Kolesov, and Vasilisa Pryadun

The explosive eruption of the Hunga Tonga-Hunga Ha'apai volcano on January 15, 2022 triggered tsunami waves that were observed throughout the Pacific Ocean. In particular, the waves were recorded by several dozen deep-ocean DART stations located at distances from the source ranging from hundreds to more than ten thousand kilometers. Our study aims at analyzing the tsunami waveforms recorded by DART stations in order to identify the formation mechanisms of this volcanogenic tsunami. Waveforms are processed using wavelet analysis. The arrival times of signals of different genesis are estimated making use of robust physical assumptions, numerical modeling and satellite images. It has been found that in all records the tsunami signal is clearly observed long before the calculated arrival time of surface gravity waves generated by sources localized in the immediate vicinity of the volcano. In the records obtained by distant stations (~10,000 km), dispersive gravity waves arrive with a delay of several hours after the signals associated with the passage of the acoustic wave in the atmosphere. In addition to the analysis of waveforms, theoretical estimates of the amplitude of gravity waves in the ocean caused by an acoustic wave in the atmosphere will be presented. We also provide a theoretical estimate of how acoustic waves in the atmosphere manifest in pressure variations recorded by an ocean-bottom sensor.
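
As a hedged illustration of the wavelet processing described above (the authors' exact toolbox is not stated), a continuous wavelet transform of a de-tided DART bottom-pressure record can be computed with PyWavelets; the input series and sampling interval below are placeholders.

```python
# Sketch of a continuous wavelet transform of a de-tided DART bottom-pressure
# record, as one way to separate arrivals of different genesis by their
# time-frequency content. Uses PyWavelets; input array and dt are placeholders.
import numpy as np
import pywt

dt = 15.0                                  # s, assumed sampling interval
pressure = np.loadtxt("dart_record.txt")   # de-tided bottom-pressure series (placeholder)

scales = np.arange(1, 512)
coeffs, freqs = pywt.cwt(pressure, scales, "morl", sampling_period=dt)

power = np.abs(coeffs) ** 2
# Rows of `power` correspond to the frequencies in `freqs`, columns to time steps,
# so the onsets of energy in different frequency bands give the arrival times of
# the Lamb-wave-forced signal and of the later dispersive gravity waves.
```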

This study was funded by a grant of the Russian Science Foundation № 22-27-00415, https://rscf.ru/en/project/22-27-00415/.

How to cite: Nosov, M., Sementsov, K., Kolesov, S., and Pryadun, V.: Volcanogenic tsunami on January 15, 2022: insights from deep-ocean measurements, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13591, https://doi.org/10.5194/egusphere-egu22-13591, 2022.

EGU22-13592 | Presentations | ITS3.6/SM1.2 | Highlight

The Near Real time analysis of Hunga Tonga-Hunga Ha’apai eruption in the ionosphere by GNSS 

Boris Maletckii and Elvira Astafyeva

The 15 January 2022 Hunga Tonga-Hunga Ha’apai (HTHH) volcanic explosion is one of the most powerful eruptive events of the last 30 years. Based on early computations, its VEI was at least 5. The explosion caused atmospheric air shock waves that propagated around the globe, and also generated a tsunami. All these effects seem to have produced a significant response in the ionosphere.

In this contribution, we analyze the ionospheric disturbances generated by the HTHH volcanic eruption by using 8 ground-based GNSS receivers located in the near field of the volcano (i.e., less than 2000 km away). We test our previously developed methods to detect and locate the explosive event and its ionospheric signatures in a near-real-time (NRT) scenario.

To detect co-volcanic ionospheric disturbances (co-VID), we use the TEC time-derivative approach that was previously used for the detection of ionospheric disturbances generated by large earthquakes. For this event, we modified the previously developed method to process not only 1-second but also 30-second data. This approach detects the first perturbations ~12-15 minutes after the eruption onset. Further, it estimates the instantaneous velocities in the near field to be about 500-800 m/s. Finally, from the obtained velocity vectors and the azimuths of co-VID propagation, we calculate the position of the source in the ionosphere.

In addition, we use the same TEC time-derivative approach to produce NRT travel-time diagrams (TTD), which further verify the source location and the velocity estimates.
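
As a hedged sketch of the TEC time-derivative detection idea (not the authors' exact implementation), one can differentiate a slant-TEC series and flag samples whose rate of change exceeds a noise-based threshold; the input series and the 4-sigma threshold below are placeholders.

```python
# Sketch: differentiate a slant-TEC series and flag excursions above a
# noise-based threshold. Input series and threshold are placeholders.
import numpy as np

dt = 30.0                               # s, GNSS sampling interval (30 s data)
tec = np.loadtxt("slant_tec.txt")       # TECu, one satellite-receiver arc (placeholder)

dtec = np.gradient(tec, dt)             # TECu/s, time derivative of TEC

# Estimate background noise from a pre-eruption window and flag excursions.
noise = np.std(dtec[:120])              # first hour of 30 s samples
detections = np.flatnonzero(np.abs(dtec) > 4.0 * noise)

if detections.size:
    print(f"first co-volcanic perturbation at t = {detections[0] * dt / 60:.1f} min")
```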

How to cite: Maletckii, B. and Astafyeva, E.: The Near Real time analysis of Hunga Tonga-Hunga Ha’apai eruption in the ionosphere by GNSS, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13592, https://doi.org/10.5194/egusphere-egu22-13592, 2022.

EGU22-13593 | Presentations | ITS3.6/SM1.2

Stratospheric observations of acoustic-gravity waves from the Hunga-Tonga eruption 

Aurélien Podglajen, Raphaël Garcia, Solene Gerier, Alain Hauchecorne, Albert Hertzog, Alexis Le Pichon, Francois Lott, and Christophe Millet

In the framework of the Strateole 2 balloon project, 17 long-duration stratospheric balloons were launched from the Seychelles in fall 2021. At the time of the main eruption of Hunga-Tonga on January 15 2022, two balloons were still in flight over the tropical Pacific, at altitudes of 20 and 18.5 km and distances of 2,200 and 7,600 km from the volcano, respectively. The balloon measurements include wind, temperature and pressure at a sampling rate of 1 Hz. Such observations of this extreme event at these altitudes are unique.

In this presentation, we will describe the observations of multiple wave trains by the balloons. The signatures of the Lamb wave and of infrasound are particularly striking. The characteristics of the eruption and its scenario will be examined using a synergy of stratospheric in situ observations, ground observations and geostationary satellite images. Finally, we will discuss the complementarity of balloon observations with the ground network, owing to their altitude and their geographic location relative to the source.

How to cite: Podglajen, A., Garcia, R., Gerier, S., Hauchecorne, A., Hertzog, A., Le Pichon, A., Lott, F., and Millet, C.: Stratospheric observations of acoustic-gravity waves from the Hunga-Tonga eruption, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13593, https://doi.org/10.5194/egusphere-egu22-13593, 2022.

EGU22-13594 | Presentations | ITS3.6/SM1.2 | Highlight

Observation and simulation of the meteotsunami generated in the Mediterranean Sea by the Tonga eruption on 15 January 2022 

Audrey Gailler, Philippe Heinrich, Vincent Rey, Hélène Hébert, Aurélien Dupont, Constantino Listowski, Edouard Forestier, and Stavros Ntafis

Meteotsunamis are long ocean waves generated by atmospheric disturbances. The Tonga volcano eruption on 15 January 2022 generated a Lamb pressure wave propagating around the globe and generating a tsunami observed at most tide gauges in the world. A first atmospheric wave arrived on the French Mediterranean coasts 20 hours after the eruption and propagated southward. This abrupt atmospheric pressure change was recorded by hundreds of barometers at weather stations around Europe. A second wave, originating from Africa, was observed four hours later with an attenuated amplitude. The first wave can be roughly described by a sinusoidal signal with a period close to one hour and an amplitude of 150 Pa. The associated tsunami was observed by the French stations of the HTM-NET network (https://htmnet.mio.osupytheas.fr/) [1]. Amplitudes range from a few cm to 15 cm and periods range from 20 min to 1 hour.

 

Numerical simulation of the tsunami is performed with the operational code Taitoko developed at CEA [2]. A nested multigrid approach is used to simulate the water waves propagating in the Bay of Toulon. The meteotsunami is generated by analytically calculating the atmospheric pressure gradient in the momentum equation. Comparisons of time series between numerical solutions and records are very satisfactory in regions covered by high-resolution topo-bathymetry. A second tsunami simulation is performed by introducing a second pressure wave propagating northward and reaching the HTM-NET stations 4 hours after the first arrival. This second pressure wave results in additional and higher tsunami waves, in agreement with the records.
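
A standard way of writing this forcing, not necessarily the exact formulation implemented in Taitoko, is to add the atmospheric pressure gradient to the shallow-water momentum equation and to idealise the Lamb pulse as a travelling sinusoid with the amplitude and period quoted above (the ~315 m/s propagation speed is an assumed Lamb-wave value):

```latex
% 1-D linearised shallow-water momentum equation with atmospheric forcing,
% and the Lamb pulse idealised as a travelling sinusoid (sketch, with assumed c).
\begin{align}
  \frac{\partial u}{\partial t} + g\,\frac{\partial \eta}{\partial x}
    &= -\frac{1}{\rho}\,\frac{\partial p_{\mathrm{atm}}}{\partial x},\\
  p_{\mathrm{atm}}(x,t) &\approx A \sin\!\left(\frac{2\pi}{T}\left(t - \frac{x}{c}\right)\right),
  \qquad A \approx 150~\mathrm{Pa},\quad T \approx 1~\mathrm{h},\quad c \approx 315~\mathrm{m\,s^{-1}}.
\end{align}
```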

 

 

[1] Rey, V., Dufresne, C., Fuda, J. L., Mallarino, D., Missamou, T., Paugam, C., Rougier, G., and Taupier-Letage, I.: On the use of long term observation of water level and temperature along the shore for a better understanding of the dynamics: Example of Toulon area, France, Ocean Dyn., 2020, https://doi.org/10.1007/s10236-020-01363-7.

[2] Heinrich, P., Jamelot, A., Cauquis, A., and Gailler, A., 2021: Taitoko, an advanced code for tsunami propagation, developed at the French Tsunami Warning Centers, European Journal of Mechanics - B/Fluids, 88(84), https://doi.org/10.1016/j.euromechflu.2021.03.001.

How to cite: Gailler, A., Heinrich, P., Rey, V., Hébert, H., Dupont, A., Listowski, C., Forestier, E., and Ntafis, S.: Observation and simulation of the meteotsunami generated in the Mediterranean Sea by the Tonga eruption on 15 January 2022, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13594, https://doi.org/10.5194/egusphere-egu22-13594, 2022.

EGU22-13595 | Presentations | ITS3.6/SM1.2

Persistence Hunga Tonga plume in the stratosphere and its journey around the Earth. 

Bernard Legras, Sergey Khaykin, Aurélien Podglajen, and Pasquale Sellitto and the ASTuS

The Hunga Tonga eruption generated an atmospheric plume rising above 40 km, establishing an observational record. Due to the explosive nature of the eruption and the large amount of water involved, the plume carried an unprecedented amount of water, and a cloud of sulfate aerosols and possibly ultra-fine ash was released. The aerosols have already persisted for four weeks, with peak scattering ratios initially above 200 that are still above 30 in many patches, as seen from CALIOP. These high values, combined with low depolarization, suggest a large amount of small sub-micron spherical particles, as confirmed by in situ balloon measurements. This is compatible with a dominance of sulfate aerosols.

As the stratospheric flow has been mostly zonal, with no wave breaking during the period and in the region of interest, and horizontal shear dominates, the plume has been dispersed mostly in longitude, keeping a similar latitude-altitude pattern since the early days. A part has migrated to the tropical band, reaching 10°N. Several concentrated patches have been preserved, in particular a mushroom-like pattern at 20°S which has already circulated once around the Earth. We will discuss the stability of this pattern in relation to vortical and thermal structures detected by several instruments and in the meteorological analyses.

We will also discuss the likely impact on stratospheric composition and the radiative effect on an annual basis.

How to cite: Legras, B., Khaykin, S., Podglajen, A., and Sellitto, P. and the ASTuS: Persistence Hunga Tonga plume in the stratosphere and its journey around the Earth., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13595, https://doi.org/10.5194/egusphere-egu22-13595, 2022.

EGU22-13598 | Presentations | ITS3.6/SM1.2

A global analysis of deep infrasound produced by the January 2022 eruption of Hunga volcano 

Julien Vergoz, Alexis Le Pichon, Constantino Listowski, Patrick Hupe, Christopher Pilger, Peter Gaebler, Lars Ceranna, Milton Garcés, Emanuele Marchetti, Philippe Labazuy, Pierrick Mialle, Quentin Brissaud, Peter Näsholm, Nikolai Shapiro, and Piero Poli

The eruption of Hunga volcano, Tonga is the most energetic event recorded by the infrasound component of the global International Monitoring System (IMS). Infrasound, acoustic-gravity and Lamb waves were recorded by all 53 operational stations, even after circling the globe four times. The atmospheric waves recorded globally exhibit amplitudes and periods comparable to those observed following the 1883 Krakatoa eruption. In the context of the future verification of the Comprehensive Nuclear-Test-Ban Treaty, this event provides a prominent milestone for studying in detail infrasound propagation around the globe for almost one week, as well as for calibrating the performance of the IMS network in a broad frequency band.

How to cite: Vergoz, J., Le Pichon, A., Listowski, C., Hupe, P., Pilger, C., Gaebler, P., Ceranna, L., Garcés, M., Marchetti, E., Labazuy, P., Mialle, P., Brissaud, Q., Näsholm, P., Shapiro, N., and Poli, P.: A global analysis of deep infrasound produced by the January 2022 eruption of Hunga volcano, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13598, https://doi.org/10.5194/egusphere-egu22-13598, 2022.

EGU22-13599 | Presentations | ITS3.6/SM1.2

Early evolution of the Hunga – Tonga Volcanic Plume from Lidar Observations at Reunion Island (Indian Ocean, 21°S, 55°E) 

Alexandre Baron, Guillaume Payen, Valentin Duflot, Patrick Chazette, Sergey Khaykin, Yann Hello, Nicolas Marquestaut, Marion Ranaivombola, Nelson Bègue, Thierry Portafaix, and Jean-Pierre Cammas

Explosive volcanism periodically induces disturbances of the upper troposphere and lower stratosphere. These injections of massive amounts of aerosols, ash and gases locally perturb the physico-chemical balance of the affected atmospheric layers, in particular the ozone concentration, via heterogeneous chemistry on particles. On a larger scale, exceptional eruptions can have a significant influence on the Earth's radiative budget, as was the case following the eruptions of El Chichón in 1982 and Mount Pinatubo in 1991.

On January 15, 2022, the Hunga-Tonga volcano erupted in the Tonga archipelago (20.5°S, 175.4°W). The Plinian eruption was of rare intensity, especially given the depth of the underwater caldera. First estimates indicate an energy equivalent to 10-15 Mt of TNT, probably the most powerful since the eruption of Krakatoa in 1883. This short (~8 min) but intense explosion, whose pressure wave was observed all around the globe, injected about 400 kt of material into the atmosphere (to be compared to the 20 Mt injected during the Mount Pinatubo eruption). The volcanic stratospheric plume (VSP) quickly moved westwards and then overflew the island of La Réunion (21°S, 55°E), located ~12,000 km away from Tonga.

In order to monitor the evolution of the VSP, lidar observations were performed at the Observatoire de Physique de l’Atmosphère de La Réunion (OPAR). This observatory is equipped with three lidars capable of stratospheric aerosol measurements at two wavelengths (355 nm and 532 nm). The first observations were performed every night from 19 to 27 January 2022, when the first passage of the VSP occurred. The plume structures appeared to be highly variable in time, with altitudes ranging from 19 km to 36 km above mean sea level, while plume thicknesses ranged from ~1 km to more than 3 km. Remarkable aerosol optical depths were associated with these stratospheric aerosol layers, up to 0.8 at 532 nm on January 21.

The temporal evolution of the VSP structure and optical properties will be presented and discussed.

How to cite: Baron, A., Payen, G., Duflot, V., Chazette, P., Khaykin, S., Hello, Y., Marquestaut, N., Ranaivombola, M., Bègue, N., Portafaix, T., and Cammas, J.-P.: Early evolution of the Hunga – Tonga Volcanic Plume from Lidar Observations at Reunion Island (Indian Ocean, 21°S, 55°E), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13599, https://doi.org/10.5194/egusphere-egu22-13599, 2022.

EGU22-13601 | Presentations | ITS3.6/SM1.2

The Hunga Tonga-Hunga Haʻapai hydration of the stratosphere 

Luis Millán, Lucien Froidevaux, Gloria Manney, Alyn Lambert, Nathaniel Livesey, Hugh Pumphrey, William Read, Michelle Santee, Michael Schwartz, Hui Su, Frank Werner, and Longtao Wu

Hunga Tonga-Hunga Haʻapai, a submarine volcano in the South Pacific, reached an eruption climax on 15 January 2022. The blast sent a plume of ash well into the stratosphere, triggered tsunami alerts across the world, and caused ionospheric disturbances. A few hours after the violent eruption, the Microwave Limb Sounder (MLS) measured enhanced values of water vapor at altitudes as high as 50 km, near the stratopause.
Over the following days, as the plume dispersed, several MLS-measured chemical species, including H2O and SO2, displayed elevated values, far exceeding any previous values in the 18-year record. In this presentation we discuss the validity of these measurements, the stratospheric evolution of the SO2 and H2O plumes, and, lastly, the implications of the large-scale hydration of the stratosphere by the eruption.

How to cite: Millán, L., Froidevaux, L., Manney, G., Lambert, A., Livesey, N., Pumphrey, H., Read, W., Santee, M., Schwartz, M., Su, H., Werner, F., and Wu, L.: The Hunga Tonga-Hunga Haʻapai hydration of the stratosphere, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13601, https://doi.org/10.5194/egusphere-egu22-13601, 2022.

ITS4 – Local solutions that have global impact: mitigation measures in geosciences to reduce the global temperature increase

EGU22-1425 | Presentations | ITS4.2/ERE1.11

Optimal design of nature-based solutions in highway runoff management based on resilience to climate and pollution load changes 

Mehrdad Ghorbani Mooselu, Helge Liltved, Mohammad Reza Alizadeh, and Sondre Meland

Sedimentation ponds (SPs) are nature-based solutions (NBSs) for sustainable stormwater management. SPs control the quantity and quality of runoff and promote biodiversity. Hence, the optimal design of SPs is crucial for ecosystem resilience in urban and natural environments. This study aims to optimize the design of roadside SPs in terms of location and surface area, considering their resilience to stressors such as climate change and pollution load variations. Accordingly, runoff from a new 22 km highway (E18 Arendal-Tvedestrand) in southern Norway was simulated with the Storm Water Management Model (SWMM). The quantity and quality (BOD and TSS values) of highway runoff in all probable scenarios of the existing uncertainties were estimated for potential outfall points using repeated executions of SWMM coded in MATLAB®. The scenarios were defined based on the application of best management practices (BMPs), including grass swales and infiltration trenches in different road sections upstream of the SPs, as well as climatic uncertainties (rainfall quantity estimated by the LARS-WG model) and modeling uncertainties (build-up and wash-off coefficients). The generated dataset was then used to assess the resilience of sedimentation ponds at potential outfalls to climate change and pollution load shocks. Resilience was quantified using three metrics: the quantity and quality of runoff received by the sedimentation ponds, and biodiversity in the ponds, over 25 years (2020-2045). The biodiversity index was defined based on Shannon's entropy computed from field observations in 12 highway sedimentation ponds across Norway. Using this procedure, it was determined that a proper arrangement of BMPs along the road and an optimal design of the ponds enhance the resilience of SPs by 40% over time. This study makes important contributions to stormwater management, the resilient design of NBSs, and the achievement of UN SDG 6 (clean water and sanitation).
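
The biodiversity metric mentioned above is Shannon's entropy, H = -Σ p_i ln p_i, where p_i is the relative abundance of taxon i in a pond. The minimal sketch below uses placeholder counts rather than data from the surveyed ponds.

```python
# Sketch of the Shannon entropy diversity index, H = -sum_i p_i * ln(p_i).
# The counts are placeholder values, not data from the surveyed ponds.
import numpy as np

counts = np.array([34, 12, 7, 3, 1])      # individuals per taxon (placeholder)
p = counts / counts.sum()                 # relative abundances
shannon_H = -np.sum(p * np.log(p))
print(f"Shannon diversity H = {shannon_H:.2f}")
```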

How to cite: Ghorbani Mooselu, M., Liltved, H., Alizadeh, M. R., and Meland, S.: Optimal design of nature-based solutions in highway runoff management based on resilience to climate and pollution load changes, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1425, https://doi.org/10.5194/egusphere-egu22-1425, 2022.

EGU22-1677 | Presentations | ITS4.2/ERE1.11

Effects of land use change for solar park development in the UK on ecosystem services 

Fabio Carvalho, Hannah Montag, Stuart Sharp, Piran White, Tom Clarkson, and Alona Armstrong

In the rush to decarbonise energy supplies to meet internationally agreed greenhouse gas emissions targets, solar parks (SPs) have proliferated around the world, with uncertain implications for the provision of ecosystem services (ES). SPs necessitate significant land use change, due to their low energy densities, that could significantly affect the local environment. In the UK, SPs are commonly built on intensive arable land and managed as grasslands. This offers both risks and opportunities for ecosystem health, yet evidence of the ecosystem consequences is scarce. Therefore, there is an urgent need to understand how ES assessments can be incorporated into land use decision making to promote SP development that simultaneously addresses the climate and biodiversity crises. We aim to provide some of the first scientific evidence to help answer this question by determining the effects of land use change for SPs in the UK on the provision of ecosystem services (e.g., biomass production, soil carbon storage) of host ecosystems. Through a Knowledge Transfer Partnership project between Lancaster University and Clarkson & Woods Ecological Consultants, 35 SPs in England and Wales were surveyed in summer 2021. Soil and vegetation data were collected from 420 sample plots (900 cm2) under different types of land use: underneath solar panels, between rows of solar arrays, and control sites (e.g., pastureland, areas set aside for conservation). Total plant cover was significantly lower underneath solar panels and between solar arrays than on land set aside for conservation, while land around the margins of SPs showed higher aboveground biomass of monocotyledons and forbs than land underneath solar panels. Some measures of soil fertility (e.g., nitrogen) and soil organic matter, fractionated into particulate and mineral-associated organic matter, also varied significantly between these different land uses. These results have implications for land management within SPs and will enable optimisation of SP design and management to ensure the long-term delivery of ecosystem services within this fast-growing land use.

How to cite: Carvalho, F., Montag, H., Sharp, S., White, P., Clarkson, T., and Armstrong, A.: Effects of land use change for solar park development in the UK on ecosystem services, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1677, https://doi.org/10.5194/egusphere-egu22-1677, 2022.

EGU22-2049 | Presentations | ITS4.2/ERE1.11

What locals want: (mapping) citizen preferences and priorities for an alpine river landscape 

Chiara Scaini, Ana Stritih, Constance Brouillet, and Anna Scaini

Sustainable river management frameworks are based on the connection between citizens and nature. So far, though, the relationship between rivers and local populations has played a marginal role in river management. We present a blueprint questionnaire to characterize how locals perceive cultural ecosystem services and flood risk, and how preferences change across the river landscape. We investigate how locals value the river and whether their preferences are affected by characteristics such as place of residence, age, frequency of visits and relation to the river. The approach is tested on the Tagliamento river, the last major free-flowing river in the Alps, which is characterized by debates on flood protection, flood management and ecological conservation. The questionnaire was filled in by more than 4000 respondents, demonstrating huge interest and willingness to contribute their opinion on this topic. A participatory map of favorite places shows that most of the river is valued by locals, with a strong preference for the landscape of the braided middle course. River conservation is the main priority for most respondents across different stakeholder groups, highlighting the need for nature-based solutions in flood-risk management and demonstrating the mismatch between management choices and citizens' values and priorities. Land-use planning is identified as a factor that can increase flood risk. The results highlight the necessity of tackling conservation, risk management and land-use planning together in order to develop risk-oriented river management strategies. More generally, this work points out that any river intervention should be considered carefully, accounting for its environmental impact also in terms of the loss of cultural ecosystem services.

How to cite: Scaini, C., Stritih, A., Brouillet, C., and Scaini, A.: What locals want: (mapping) citizen preferences and priorities for an alpine river landscape, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2049, https://doi.org/10.5194/egusphere-egu22-2049, 2022.

Check dams play a crucial role in controlling soil erosion on the Loess Plateau and reducing sediment loads in the Yellow River. Moreover, sediment deposition in check dams also provides valuable information for understanding soil erosion on the Loess Plateau. Studying the influence of rainfall patterns on sediment yield at the small-catchment scale is important for the rational arrangement of soil and water conservation measures, particularly in complex environments such as the wind-water erosion crisscross region. This study estimated the sediment yield trapped by the check dam in the Laoyeman catchment based on deposited flood couplets formed during erosive rainfall events in the period 1978-2010. All erosive rainfall events were divided into three rainfall patterns according to precipitation, rainfall duration and rainfall erosivity, and the correspondence between sediment yield and rainfall pattern was analyzed. Results showed that 1.1×10^5 t of sediment was deposited in the dam field over the trapping history of the check dam as a whole. Deposition occurred in three distinct stages, with sediment yields of 4.53×10^4 t during 1978-1988, 4.48×10^4 t during 1988-1997, and 1.68×10^4 t during 1997-2010. The 1989-1997 stage had the fastest annual deposition rate, 4.98×10^3 t/year, which is 20.9% and 286% faster than the 1978-1988 and 1998-2010 stages, respectively. For similar rainfall patterns in these three stages, the sediment yield and the characteristics of the flood couplets were closely related to both rainfall erosivity and land use types. This was also confirmed by the significant decrease in sediment yield under similar rainfall patterns in the decade before and after the implementation of the Grain for Green project, indicating that this project made a great contribution to the control of soil erosion on the Loess Plateau. The impact of rainfall pattern on sediment yield indicated that the largest sediment yields are initiated by short-duration, high-intensity rainfall events, while the sediment in the reservoir area is mainly deposited under rainfall patterns of moderate precipitation, erosivity and duration. This explains why the wettest year (1995) had relatively low sediment deposition, while the year with the strongest rainfall erosivity (1982) had the maximum annual sediment yield (1.68×10^4 t).

How to cite: Yin, M. and Zhang, J.: Influence of rainfall patterns on sediment yield in flood couplets of a check dam on the Chinese Loess Plateau, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3322, https://doi.org/10.5194/egusphere-egu22-3322, 2022.

EGU22-3770 | Presentations | ITS4.2/ERE1.11

The approach ‘think global, act local’ neglects the particular ecological value of ecosystems 

Guido J.M. Verstraeten and Willem W. Verstraeten

A sustainable society can be considered an organic system, called an ecosystem, in which all connected parameters contribute to the conservation and evolution of the ecosystem, containing life and landscape, against stress from outside. Any ecosystem contains species of mutually interacting organisms, all contributing to a dynamic equilibrium. An ecosystem is characterized by a population carrying capacity.

Humans are the only species on Earth without a specific ecosystem: they live everywhere. Evolution did not adapt Homo sapiens to a particular ecosystem; on the contrary, humans have transformed all ecosystems into their own environment. Nature is turned into environment when humans manage an ecosystem and transform it into their environment, attributing to nature the concept of natural capital as a first instrumental step towards economic growth and considering pollution as collateral damage.

Inspired by Enlightenment Anthropology (Shallow Ecology and Naess' Deep Ecology), the UN encourages humanity to make the consumption of raw materials, energy and food more sustainable and cleaner, and even to start a transition of energy resources and the human diet in order to dampen the effects of global warming. Economic policy supports technological procedures that avoid waste of raw material and stimulate sustainable production processes and the sustainable recovery of raw materials from manufactured items. The energy transition and the preferred industrial production methods, however, are imposed top-down at a global level, without examining the consequences for the local life of humans and non-humans (e.g. wind turbines near human settlements, bird mortality, destruction of seafloor ecosystems) and for the landscape (e.g. solar energy systems on hillsides, water dams). Moreover, the global view favors large scale in policy as well as in means of production. However, this globally organized transition of the environment establishes a new order characterized by global and universal action, which is not in balance with local ecosystems characterized by a diversity of life and human management (so-called perverted adaptation). Nature is reduced to things and rewarded merely in terms of natural capital to sustain a Global Urban Middleclass consumptive society.

Therefore, we adopt Aldo Leopold's ‘Land Ethic’ (1949) and apply it to the shear coast of Southwestern Finland. We summarize his ideas in three headline points: (i) the land ethic changes the role of Homo sapiens from conqueror of the land-community to plain member and citizen of it; (ii) we abuse land because we regard it as a commodity belonging to us, whereas when we see land as a community to which we belong, we may begin to use it with love and respect; (iii) a thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community, and wrong when it tends otherwise. Participation in the ecosystem based on autonomous, i.e. uncontrolled, technology is focused on the global energy transition to save the Universal Urban Middleclass Life. On the contrary, the concept of Land Ethics makes room for eco-development based on care for humans, culture, environment and nature in interaction with all ecosystems. In a nutshell: act local, interact global.

How to cite: Verstraeten, G. J. M. and Verstraeten, W. W.: The approach ‘think global, act local’ neglects the particular ecological value of ecosystems, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3770, https://doi.org/10.5194/egusphere-egu22-3770, 2022.

EGU22-3784 | Presentations | ITS4.2/ERE1.11

Assessing the interconnections between the characteristics, perception, and valuation of Nature-Based Solutions: A case study from Aarhus, Denmark 

Martina Viti, Roland Löwe, Hjalte J.D. Sørup, Ursula S. Mcknight, and Karsten Arnbjerg-Nielsen

When assessing strategies for the implementation of Nature-Based Solutions (NBS), it is fundamental to quantify all benefits to secure better-informed decision making. Particularly relevant is the quantification of their multiple co-benefits for communities and the environment. One of the most widespread techniques to quantify these values is the use of contingent valuation (CV) methods, such as the Willingness-To-Pay (WTP) approach. Within the CV method, questionnaires are the main tool used to elicit the value that respondents attribute to a specific good. However, many studies focus on site-specific economic valuation, which jeopardizes transferability to other locations. We therefore created a survey to explore how the valuation of an NBS is shaped by its relationship with the users (e.g. frequency and length of visits), and how these responses are linked to both the respondents' and the sites' characteristics (e.g. socio-economic status, size of the NBS, etc.).

We applied this method to a case study comprising two distinct areas located in Aarhus, Denmark, asking users about their perception of the two NBS sites with different features. Both NBS sites have the overarching goals to (i) prevent flooding from cloudbursts or water bodies, (ii) improve biodiversity in the area, and (iii) benefit the local population, e.g. by providing more recreational areas. Despite these common goals, the two sites differ in a number of characteristics, i.e. size, location, and time passed since construction. One NBS involves a large artificial lake in a peri-urban setting, while the other is a small urban park. Respondents were given the option of expressing a value for either only one or both of the sites.

We analyzed both responses that stated a WTP and protest votes, that is, responses that rejected the valuation scenario altogether. We found that older citizens are more likely to protest, as are those not visiting the sites. For the respondents who agreed to state a WTP, bids increased significantly when the improvement of nature and biodiversity was mentioned in the valuation scenario. Comparing the values given to the two sites, the characteristics of the NBS seem to play a role in the respondents' perception and use of the sites, which in turn enhances valuation. In our case study, people's perception of the site and their relationship with it appear to have a stronger link with WTP than their socio-economic characteristics. Specifically, frequency and length of visits, and interest in a good quality of nature, were most strongly related to a positive WTP.

The inclusion of people-NBS relational variables in benefit quantification appears to be essential for more realistic economic valuation, as well as for correctly designing NBS to achieve the desired impacts. Understanding the underlying synergies between the multiple co-benefits of NBS, their features and the users' perception is decisive for maximizing these strategies' potential and avoiding missed opportunities.

How to cite: Viti, M., Löwe, R., Sørup, H. J. D., Mcknight, U. S., and Arnbjerg-Nielsen, K.: Assessing the interconnections between the characteristics, perception, and valuation of Nature-Based Solutions: A case study from Aarhus, Denmark, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3784, https://doi.org/10.5194/egusphere-egu22-3784, 2022.

EGU22-4971 | Presentations | ITS4.2/ERE1.11

Integrating remote sensing and social media data advances assessment of cultural ecosystem services 

Oleksandr Karasov, Stien Heremans, Mart Külvik, Artem Domnich, Iuliia Burdun, Ain Kull, Aveliina Helm, and Evelyn Uuemaa

Over the past decade, we have witnessed rapid growth in the use of social media data for assessing cultural ecosystem services (CESs), for example in modelling supply-demand relationships. Researchers increasingly use user-generated content (predominantly geotagged pictures and texts from Flickr, Twitter, and VK.com) as a spatially explicit proxy of CES demand. However, for modelling CES supply, most such studies have relied on simplistic geospatial data, such as land cover and digital elevation models. As a result, our understanding of the favourable environmental conditions underlying good landscape experience remains weak and overly generic.

Our study aims to detect spatial disparities between population density and CES supply in Estonia in order to prioritise areas for further in-depth CES assessment and for green and blue infrastructure improvements. We relied on Flickr and VK.com photographs to detect the usage of three CESs: passive landscape watching, active outdoor recreation, and wildlife watching (biota observations at organism and community levels), using automated image content recognition via the Clarifai API and subsequent topic modelling. We then used a cloudless Landsat-8 mosaic, digital elevation and digital surface models, and a land cover model to derive 526 environmental variables (textural and spectral indices and other indicators of landscape physiognomy) via the Google Earth Engine platform. We conducted ensemble environmental niche modelling to analyse the relative strength and direction of relationships between these predictors and the observed occurrence of CES demand. Based on multicollinearity and relative importance analysis, we selected 21 relevant and non-collinear indicators of CES supply. With these indicators as inputs, we then trained five models popular in environmental niche modelling: Boosted Regression Trees, Generalized Linear Model, Multivariate Adaptive Regression Spline, Maxent, and Random Forest. Random Forest performed better than the other models for all three CES types, with an average 10-fold cross-validation area under the curve >0.9 for landscape watching, >0.87 for outdoor recreation, and >0.85 for wildlife watching. The modelling allowed us to estimate the share of the Estonian population residing in spatial clusters of systematically high and low environmental suitability for the three considered CESs. The share of the population residing in clusters of low environmental suitability for landscape watching, outdoor recreation, and wildlife watching is 5.5%, 3.1%, and 7.3%, respectively. These results indicate that tens of thousands of people in Estonia (population >1.3 million) likely have fewer opportunities for everyday use of the considered CESs. However, these results are biased, as in some of these areas there was not enough evidence of CES use in social media.
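As a purely illustrative sketch of the Random Forest step of such a workflow (not the authors' code), the example below trains a classifier on occurrence versus background cells and reports a 10-fold cross-validated AUC; the file name, column names and number of trees are hypothetical.

```python
# Minimal sketch (not the authors' pipeline): Random Forest environmental niche
# model evaluated with 10-fold cross-validated AUC. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Assumed input: one row per grid cell, the selected predictors plus a binary label
# marking cells with geotagged photos of the CES (1) vs. background cells (0).
data = pd.read_csv("ces_landscape_watching.csv")          # hypothetical file
predictors = [c for c in data.columns if c != "occurrence"]
X, y = data[predictors], data["occurrence"]

model = RandomForestClassifier(n_estimators=500, random_state=42)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"10-fold CV AUC: {auc.mean():.2f} +/- {auc.std():.2f}")

# Fit on all data and rank predictor importance (one way to screen a large set of
# candidate variables down to a relevant subset).
model.fit(X, y)
importance = pd.Series(model.feature_importances_, index=predictors).sort_values(ascending=False)
print(importance.head(10))
```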

Although our results should be treated with caution, because social media data are likely to contain considerable sampling bias, we have demonstrated the added value of remote sensing data for estimating CES supply. Given nearly global and continuously updated satellite imagery archives, remote sensing opens new perspectives for monitoring losses and gains in landscape suitability for CES across temporal and spatial scales. As such, we can better account for the intangible underlying geospatial features that can influence economic and environmental decision-making.

How to cite: Karasov, O., Heremans, S., Külvik, M., Domnich, A., Burdun, I., Kull, A., Helm, A., and Uuemaa, E.: Integrating remote sensing and social media data advances assessment of cultural ecosystem services, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4971, https://doi.org/10.5194/egusphere-egu22-4971, 2022.

EGU22-6317 | Presentations | ITS4.2/ERE1.11

Effect of soil management practices on soil carbon dynamics under maize cultivation 

Michael Asante, Jesse Naab, Kwame Agyei Frimpong, Kalifa Traore, Juergen Augustin, and Mathias Hoffmann

An increasing world population and changing consumer preferences necessitate increased food production to meet the demand of a changing world. Intensified agriculture and an accelerating climate crisis with increasing weather extremes threaten the resource base needed to improve crop production. Maize yields obtained by farmers in the Guinea savannah zone of Ghana are generally low due to the poor soil fertility status resulting from continuous cropping coupled with low use of external inputs. Integrated Soil Fertility Management (ISFM) practices have proven to sustainably increase maize yield. However, the majority of farmers practicing ISFM till their land conventionally, potentially resulting in substantial greenhouse gas (GHG) emissions that contribute to global climate change, and there is a dearth of information on GHG emissions from crop production systems in sub-Saharan Africa in general and Ghana in particular. Hence, within a field trial, we seek to investigate the impact of different tillage practices and ISFM applied to sustain maize yield on net ecosystem CO2 exchange (NEE) and the net ecosystem carbon balance (NECB). The field trial was established at the Council for Scientific and Industrial Research-Savanna Agricultural Research Institute in the Northern Region of Ghana. A split-plot design was used, with conventional and reduced tillage as main-plot treatments and a factorial combination of organic and inorganic fertilizers at three levels each as subplot treatments. To determine NEE, and estimates of NECB based thereon, an innovative, customized, low-cost, manual dynamic closed-chamber system was used. The system consists of transparent chambers (V: 0.37 m3, A: 0.196 m2; for NEE measurements) and opaque chambers of the same size (for ecosystem respiration (Reco) measurements). Diurnal regimes of Reco and NEE fluxes were measured twice a month by repeatedly deploying chambers for 5 to 10 min on the 3 repeated measurement plots (PVC frames inserted 5 cm deep into the soil as collars) per treatment. The CO2 concentration increase or decrease over the chamber deployment time was recorded by portable, inexpensive Arduino-based CO2 logging systems, each consisting of a battery-powered microcontroller (Arduino Uno) and data logging unit (3 s frequency) connected to an NDIR CO2 sensor (SCD30; ±30 ppm accuracy), an air temperature and humidity sensor (DHT-22), and an air pressure sensor (BMP280). Measured CO2 fluxes were subsequently gap-filled to obtain seasonal NEE. Carbon imports and exports were then added to NEE to determine the NECB for each treatment. In parallel to the CO2 exchange measurement campaigns, agronomic and crop growth indices such as the normalized difference vegetation index (NDVI) were recorded biweekly on all plots. Here we present NEE and NECB balances for the first crop growth period.
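For illustration only (this is not the authors' processing chain), the sketch below shows one common way to turn a chamber concentration record into a CO2 flux via a linear slope and the ideal gas law, and one possible NECB bookkeeping from seasonal NEE and lateral carbon fluxes; all numerical values except the chamber dimensions are placeholders, and sign conventions vary between studies.

```python
# Minimal sketch: chamber CO2 flux from concentration change, and a simple NECB
# bookkeeping (illustrative numbers; not the study's gap-filling or sign convention).
import numpy as np

R = 8.314          # J mol-1 K-1, universal gas constant
V = 0.37           # m3, chamber volume (from the abstract)
A = 0.196          # m2, chamber basal area (from the abstract)

def chamber_flux(time_s, co2_ppm, temp_k, pressure_pa):
    """CO2 flux in umol m-2 s-1 from a single chamber deployment.

    Fits a linear slope (ppm s-1) to the concentration record and converts it to a
    molar flux using the molar density of air from the ideal gas law, n/V = P/(R*T).
    """
    slope_ppm_s = np.polyfit(time_s, co2_ppm, 1)[0]
    mol_air_per_m3 = pressure_pa / (R * temp_k)
    # ppm s-1 -> umol m-3 s-1 -> umol m-2 s-1
    return slope_ppm_s * mol_air_per_m3 * V / A

# Synthetic opaque-chamber deployment (3-s logging over ~5 min, Reco > 0):
t = np.arange(0, 300, 3.0)
co2 = 410 + 0.02 * t + np.random.normal(0, 1.5, t.size)
print(f"Reco flux: {chamber_flux(t, co2, 303.15, 100500):.2f} umol m-2 s-1")

# One possible NECB bookkeeping, all in g C m-2 season-1 (placeholder values):
nee_seasonal = -120.0      # hypothetical gap-filled seasonal NEE (net uptake)
c_export = 180.0           # C removed with harvested grain and residues
c_import = 40.0            # C added with organic fertilizer
necb = nee_seasonal + c_export - c_import
print(f"NECB: {necb:.1f} g C m-2 season-1")
```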

Keywords: Tillage, Integrated soil fertility management, CO2 emission, Zea mays, net ecosystem carbon balance (NECB)

How to cite: Asante, M., Naab, J., Agyei Frimpong, K., Traore, K., Augustin, J., and Hoffmann, M.: Effect of soil management practices on soil carbon dynamics under maize cultivation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6317, https://doi.org/10.5194/egusphere-egu22-6317, 2022.

EGU22-7121 | Presentations | ITS4.2/ERE1.11

Application of the International Guidelines on Natural and Nature Based Features for Flood Risk Management and the way forward 

Ralph Schielen, Chris Spray, Chris Haring, Jo Guy, and Lydia Burgess-Gamble

In 2021, the International Guidelines on Natural and Nature-Based Features for Flood Risk Management were published as the result of a joint project between Rijkswaterstaat (Netherlands), the Environment Agency (England) and the Army Corps of Engineers (USA). These Guidelines give direction on the application of Nature-Based Solutions (NBS) in coastal and fluvial systems. In this contribution we will focus on the fluvial part of the Guidelines. We will briefly discuss the process that led to the Guidelines and their intended use. It is important to realize that the location within a catchment and the scale of the catchment determine the specifications of the most suitable NBS. In the classical 'source-pathway-receptor' approach, NBS in the source areas of larger catchments aim to hold back water in the headwaters, enhancing the management of water and sediment. In the pathway and receptor areas (the floodplains), NBS focus more on increasing the discharge capacity of the main stem. In smaller catchments, temporary storage of water in the floodplains is also an option, if flooding of such a temporary nature can be accommodated. Rather than a detailed instruction guide, the Guidelines are intended to present best practices and to list important points of attention when applying NBS. Furthermore, they act as inspiration through the many case studies they list.

We will also connect the Guidelines to other initiatives on the application of NBS, for example the impact that NBS might have on reaching the United Nations Sustainable Development Goals. This requires a proper assessment framework, which has been developed in adjacent projects and which values the added co-benefits of NBS compared to grey or grey-green alternatives. These benefits are also addressed in the Guidelines. Finally, we will share some thoughts on upscaling and mainstreaming NBS and the actions needed to accomplish that.

How to cite: Schielen, R., Spray, C., Haring, C., Guy, J., and Burgess-Gamble, L.: Application of the International Guidelines on Natural and Nature Based Features for Flood Risk Management and the way forward, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7121, https://doi.org/10.5194/egusphere-egu22-7121, 2022.

The PHUSICOS platform aims at gathering nature-based solutions (NBS) relevant to reducing hydro-geological risks in mountain landscapes. The platform can be accessed directly through a web portal. It is based on an open-source CMS website and includes a filter system for storing and retrieving documents and a map server providing ergonomic and powerful access.

To design the platform, an in-depth review of 11 existing platforms was performed. Furthermore, a list of metadata was proposed to structure the information; these metadata provided the baseline for the database. The PHUSICOS platform currently references 176 NBS cases and 83 documents of interest (review articles, assessment papers, etc.). It is continuously enriched through contributions from the NBS community.

To this end, a questionnaire based on the data necessary for the definition and identification of an NBS (the metadata used for searching NBSs within the platform) has been defined for adding new entries. A preliminary analysis of the cases has been carried out. To characterize and analyse the current 152 solutions, we worked on four categories: the nature of the impacted ecosystems, the hazard(s) concerned, the other challenges addressed by the NBS, and the type of exposed assets.

The platform also proposes a qualitative assessment of the collected NBSs according to 15 criteria related to five ambits: disaster risk reduction, technical and economic feasibility, environment, society, and local economy. The criteria are sufficiently general to be applied to all NBSs on the PHUSICOS platform, whatever the type of work, the approaches taken, the problem addressed, or the spatial or temporal scale.

The structure of the platform and a first analysis of the qualitative NBS assessment are presented in this work.

How to cite: Bernardie, S., Baills, A., and Garçin, M.: PHUSICOS platform, dedicated to Nature-Based Solutions for Risk Reduction and Environmental Issues in Hilly and Mountainous Lands : presentation and qualitative NBS assessment, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7664, https://doi.org/10.5194/egusphere-egu22-7664, 2022.

An adequate strategy for water quality improvement in developing countries must consider the economic scarcity of water, the external factors that affect its quality, and the participation of multisectoral stakeholders in water management decisions. In addition, stronger links to nature can be established through methods inspired by nature to clean the water, such as artificial floating islands (AFIs). Restoration of aquatic ecosystems with AFIs occurs as water passes beneath the floating mat and the roots of macrophytes take up metals and nutrients. In this context, we used Fuzzy Cognitive Maps (FCMs) to identify the principal concepts that affect water quality from different perspectives: political, economic, social, technological, environmental, and legal (PESTEL). We also theoretically explore the use of AFIs combined with different policies to find the strategy that best fits the local water situation.

By applying the principles of FCMs, different sources of knowledge can be used to predict the effects of policies, and problems can be identified using the centrality index from the underlying graph. A two-step approach was therefore implemented: first, starting from 40 literature-based PESTEL concepts related to water quality deterioration, local experts in water management were invited to identify the most influential concepts and to add further ones regarding the local water situation and policies supporting the improvement of water quality. Second, workshops were organized in which community members discussed the degree of cause-effect influence among the identified concepts and also included a water management policy, considering AFIs as one solution.
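As an illustration of the mechanics behind such an analysis (not the authors' implementation), the sketch below encodes a tiny FCM as a signed weight matrix, iterates concept activations with a sigmoid transfer function, and ranks concepts by centrality (the sum of absolute incoming and outgoing weights); the concepts and weights are hypothetical.

```python
# Minimal FCM sketch (hypothetical concepts and weights, not the study's map).
import numpy as np

concepts = ["untreated sewage", "environmental education", "AFI deployment", "water quality"]
# W[i, j] = influence of concept i on concept j, in [-1, 1].
W = np.array([
    [0.0, 0.0, 0.0, -0.8],   # sewage degrades water quality
    [0.0, 0.0, 0.6,  0.3],   # education promotes AFIs and good practices
    [0.0, 0.0, 0.0,  0.7],   # AFIs improve water quality
    [0.0, 0.0, 0.0,  0.0],
])

def run_fcm(state, W, steps=30, lam=2.0):
    """Iterate a_j(t+1) = sigmoid(a_j(t) + sum_i a_i(t) * W[i, j])."""
    for _ in range(steps):
        state = 1.0 / (1.0 + np.exp(-lam * (state + state @ W)))
    return state

# Scenario: activate "AFI deployment" together with "environmental education".
initial = np.array([0.9, 0.8, 0.8, 0.5])
print(dict(zip(concepts, run_fcm(initial, W).round(2))))

# Centrality = sum of absolute outgoing and incoming weights per concept.
centrality = np.abs(W).sum(axis=1) + np.abs(W).sum(axis=0)
print(dict(zip(concepts, centrality.round(2))))
```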

Three Ecuadorian communities, distributed so as to cover representative ecosystems of the Pacific coast, the Andean mountains, and the Amazon floodplain, were selected for this research: the community of Mogollón, dominated by mangrove land cover; Chilla Chico, by páramos; and Awayaku, by rainforest. According to the FCMs, 21 PESTEL concepts affect water quality in the páramo community, most of them related to politics (23%) and the environment (23%). The community workshop in the same community identified natural water pollutants as the major problem. For the mangrove community, 23 concepts were identified, mainly driven (47%) by environmental concepts, whereas the community sees human exposure to environmental pollutants as the major water quality issue. In the case of the rainforest community, 19 concepts were recognized, 40% of them related to economics, whereas the community identifies the violation of environmental legislation as the principal concern. Regarding the potential implementation of AFIs, the páramo community concludes that AFIs should be implemented and coupled with environmental education programs; additionally, water-related governmental institutions should be involved during realization. The mangrove community shows interest in AFIs when combined with payment for ecosystem services. Finally, the rainforest community does not consider AFIs a primary solution; instead, it proposes the creation of a committee to denounce violations of water quality laws and to improve the educational level of community members. In conclusion, the FCM is a powerful tool to bring together the knowledge of multisectoral stakeholders and to analyse suitable strategies for the local improvement of water quality.

How to cite: Fonseca, K., Correa, A., and Breuer, L.: Using the fuzzy cognitive map approach to promote nature-based solutions as a strategy to improve water quality in Ecuadorian communities, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8395, https://doi.org/10.5194/egusphere-egu22-8395, 2022.

EGU22-8884 | Presentations | ITS4.2/ERE1.11

Effects of the Nature-Based Solutions on the ecosystem services; an evaluation of the Piave River catchment (Italy) in a 2050 scenario 

Francesco Di Grazia, Luisa Galgani, Bruna Gumiero, Elena Troiani, and Steven A. Loiselle

Sustainable river management should consider potential impacts on ecosystem services in decision-making with respect to mitigating future climate impacts. In this respect, there is a clear need to better understand how nature-based solutions (NBS) can benefit specific ecosystem services, in particular within the complex spatial and temporal dynamics that characterize most river catchments. To capture these changes, ecosystem models require spatially explicit data that are often difficult to obtain for model development and validation. Citizen science allows for the participation of trained citizen volunteers in research or regulatory activities, resulting in increased data collection and increased participation of the general public in resource management.

In the present study, we examined the temporal and spatial drivers of nutrient and sediment delivery, carbon storage and sequestration, and water yield in a major Italian river catchment under different NBS scenarios. Information on climate, land use, soil and river conditions, as well as future climate scenarios, was used to explore the future (2050) benefits of NBS at local and catchment scales, following the national and European directives related to water quality (Directive 2000/60/EC) and habitats (Directive 92/43/EEC). We estimate the benefits of individual and combined NBS approaches related to river restoration and catchment reforestation.

How to cite: Di Grazia, F., Galgani, L., Gumiero, B., Troiani, E., and Loiselle, S. A.: Effects of the Nature-Based Solutions on the ecosystem services; an evaluation of the Piave River catchment (Italy) in a 2050 scenario, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8884, https://doi.org/10.5194/egusphere-egu22-8884, 2022.

EGU22-9397 | Presentations | ITS4.2/ERE1.11

Analysis of survival probability on multiple species using metapopulation model 

Eun sub Kim, Yong won Mo, Ji yeon Kim, and Dong kun Lee

The ecological concept of the metapopulation helps evaluate the effectiveness of conservation areas (Soule et al., 1988) and is a useful tool for evaluating the responses of populations to anthropogenic stressors such as urbanization, habitat destruction, and fragmentation (Kawecki, 2004). In particular, metapopulation models can help increase the accuracy of population estimation across various spatial scales and explain interactions among populations (Walther et al., 2002; Faborg, 2014). Previous studies have demonstrated that habitat destruction and fragmentation caused by urbanization can affect the viability of species in their habitats through reduced fertility and mobility, but studies on how the selection of conservation areas can increase the viability of multiple species under changing surroundings remain insufficient. Therefore, this study analyzed the probability of multiple species surviving in their habitats using a metapopulation model under different conservation area scenarios and examined the effect of habitat pattern changes on each population from various perspectives.

To analyze the survival probability of multiple species under each conservation area scenario, we (1) set up 15 virtual habitat patches within 160 ha, (2) defined 'Big' and 'Small' conservation scenarios considering habitat area and connectivity, and (3) collected and estimated migration rates, home ranges, and dispersal distances for the studied species in order to analyze the extinction risk of each population. Finally, the change in the size of each population over time t was analyzed using the metapopulation model.
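For illustration only (not the authors' model), the sketch below simulates a stochastic patch-occupancy metapopulation in which colonization decays with inter-patch distance and extinction risk decreases with patch area, and estimates a persistence probability over a fixed horizon; all parameter values, patch layouts, and the bird/small-mammal contrast are hypothetical.

```python
# Stochastic patch-occupancy metapopulation sketch (hypothetical parameters).
import numpy as np

rng = np.random.default_rng(0)
n_patches = 15
coords = rng.uniform(0, 1265, size=(n_patches, 2))   # ~160 ha square, metres
area_ha = rng.uniform(2, 20, size=n_patches)          # patch areas

def simulate_persistence(dispersal_m, ext_base, years=50, reps=500):
    """Fraction of replicates in which the species still occupies >= 1 patch."""
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    persisted = 0
    for _ in range(reps):
        occupied = rng.random(n_patches) < 0.6          # initial occupancy
        for _ in range(years):
            # Colonization pressure decays exponentially with distance.
            pressure = (occupied[None, :] * np.exp(-dist / dispersal_m)).sum(axis=1)
            colonize = rng.random(n_patches) < 1 - np.exp(-pressure)
            # Extinction risk shrinks with patch area.
            extinct = rng.random(n_patches) < ext_base / area_ha
            occupied = (occupied & ~extinct) | colonize
        persisted += occupied.any()
    return persisted / reps

# Hypothetical contrast between a long-distance and a short-distance disperser:
print("bird        :", simulate_persistence(dispersal_m=800, ext_base=1.5))
print("small mammal:", simulate_persistence(dispersal_m=150, ext_base=0.8))
```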

Overall, the probability of extinction of all species was lowest when the Big conservation scenario was applied, followed by the Big+Connectivity scenario, and survival probabilities were similar in the Small and the Connectivity scenarios. However, the preferred conservation scenario differed among taxonomic groups: in particular, birds had a high probability of extinction in the Small scenario, whereas small mammals had a low one. This study analyzed the effect of conservation area scenarios on the population changes of multiple species, and the approach is expected to be useful for evaluating the validity and effectiveness of future conservation area designation.

How to cite: Kim, E. S., Mo, Y. W., Kim, J. Y., and Lee, D. K.: Analysis of survival probability on multiple species using metapopulation model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9397, https://doi.org/10.5194/egusphere-egu22-9397, 2022.

EGU22-9474 | Presentations | ITS4.2/ERE1.11

Co-evaluating and -designing a Sustainable Agriculture Matrix for Austria in an international context 

Christian Folberth, Franz Sinabell, Thomas Schinko, and Susanne Hanger-Kopp

Agricultural ecosystems provide essential services mainly through food, feed, fiber and consequently income but they also contribute cultural, supporting and regulating services. In turn, farming can adversely affect ecosystem services, especially those from natural ecosystems, if farming practices are unsustainable.

Recently, a Sustainable Agriculture Matrix (SAM; https://doi.org/10.1016/j.oneear.2021.08.015) of indicators across environmental, economic, and social dimensions has been developed by an international research team to coherently quantify the sustainability of countries' farming systems globally. The focus was on indicators that can be tracked over time and that relate to performance, in order to facilitate analyses of synergies and trade-offs. At present, this indicator system is being co-evaluated with stakeholders in ten countries within an international consortium including Austria, to elicit stakeholders' appraisal of the framework's applicability in their specific geographical and socioeconomic context and eventually to co-design a revised matrix based on stakeholders' requirements.

A first workshop has shown that most indicators from the environmental dimension are useful for stakeholders in the Austrian context, but some need further refinement. Biodiversity, for example, is only considered via land cover change, whereas threats to (agro-)biodiversity in Austria and the EU foremost occur in situ. The economic dimension ranks second in its usefulness for Austrian stakeholders, with a few indicators, such as food loss, being of little relevance. The indicators presently included in the social dimension are the least relevant, as they cover aspects such as land rights, undernourishment, and rural poverty, which do not pose major issues in Austria and, more broadly, the EU.

General concerns of stakeholders are the directionality of indicator ratings and their scope, which is in part considered too narrow. For example, high government expenditure for agriculture is rated positively in the matrix regardless of its purpose, although it may create dependencies. Human nutrition is only included via undernourishment, and soil nutrient status solely as surplus, whereas in both cases the other extreme may also be adverse; accordingly, a bell-shaped indicator rating would be favored in such cases. A general requirement was expressed for an additional context dimension. Governance arrangements and the overall socioeconomic situation are so far deliberately not included, due to the focus on performance in the existing SAM. Yet indicators describing such framework conditions can be essential for interpreting synergies and trade-offs and the effectiveness of policy measures aimed at achieving the SDGs. Beyond the evaluation of existing indicators, the stakeholder process yielded comprehensive suggestions for additional indicators covering biodiversity, research and education, self-sufficiency, and various aspects of resilience and stability. Overall, the co-evaluation with stakeholders highlights that only few globally defined indicators are readily applicable in a regional context, where consideration of local conditions and specifics is vital.

The proposed revisions are now being matched with available data across geographic scales to revise the matrix and perform further analyses on trade-offs and synergies. This will also include further context information to facilitate the evaluation of policies, ultimately allowing for improved policy-making to attain agricultural sustainability. Results will be further co-evaluated iteratively with stakeholders to eventually produce a globally applicable indicator system.

How to cite: Folberth, C., Sinabell, F., Schinko, T., and Hanger-Kopp, S.: Co-evaluating and -designing a Sustainable Agriculture Matrix for Austria in an international context, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9474, https://doi.org/10.5194/egusphere-egu22-9474, 2022.

EGU22-9476 | Presentations | ITS4.2/ERE1.11

Detection of Habitat Heterogeneity Changes Using Laser Scanning Data Targeting Birds 

Ji Yeon Kim, Dong Kun Lee, and Eun Sub Kim

Research dealing with three-dimensional structural data of forests and vegetation is increasing. LiDAR-based research to detect biodiversity (LaRue et al. 2019) is growing, using structural data such as the heterogeneity, distribution, and height of forest structures (Matsuo et al. 2021) or rugosity (Gough et al. 2020). For example, techniques for detecting canopy structure are being linked with GEDI data, leading to structural diversity mapping at broad scales and further to β-diversity (Schneider et al. 2020). Meanwhile, most connectivity studies so far have been conducted on two-dimensional surfaces, based on resistance values derived from species data, topography, vegetation structure, and habitat quality. In this study, we attempt to detect changes in the spatial distribution patterns of species due to anthropogenic intervention using LiDAR-based 3D structural data. Connectivity at the landscape level is analyzed through structural heterogeneity and, for this purpose, compared with traditional diversity evaluation methods through a verification process based on species data. By detecting impacts on species in advance, at the impact assessment stage, this study aims to present a methodology that can serve as a decision-support tool for forestry and conservation in combination with ICT-based monitoring technology.

How to cite: Kim, J. Y., Lee, D. K., and Kim, E. S.: Detection of Habitat Heterogeneity Changes Using Laser Scanning Data Targeting Birds, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9476, https://doi.org/10.5194/egusphere-egu22-9476, 2022.

EGU22-9542 | Presentations | ITS4.2/ERE1.11

Eliciting public preferences for wildfire management policies in Crete, Greece 

Haleema Misal, Ioannis Kountouris, Apostolos Voulgarakis, and Anastasios Rovithakis

Fire regimes form an integral part of terrestrial biomes in the Mediterranean region, as they provide essential disturbances that change the structure and function of plants favouring Mediterranean-type climates. Fire is inextricably linked to such ecosystems and cannot be excluded from them. However, the intensification of human activities in Greece, coupled with increasingly unpredictable wildfires, has created huge imbalances and jeopardised the ecological integrity of ecosystems. Expansion into the wildland-urban interface, rural abandonment, and the focus on fire suppression are increasing the vulnerability and flammability of the Greek environment. The duality of fire is delicate at both local and national levels: catastrophic wildfires leave deep burns on landscapes and economies, and the social burns can take just as long to heal. In Greece, this is further exacerbated by the burgeoning socio-economic and political complexities that have catalysed the current ineffective and unsustainable fire management strategies. Damages from wildfires affect ecosystem services, which can lead to a reduction in human wellbeing. Understanding the interactions between ecosystems and humans through environmental valuation is key to implementing effective policy. This study uses economic valuation methods in the form of a choice experiment to elicit public preferences for a wildfire management policy in Crete. A survey was deployed around the island, with respondents asked about their preferences between different management strategies. The policies outlined in the survey are made up of the following attributes: risk of fire, agricultural production, landscape quality, and post-wildfire damage mitigation. Results from this study indicate a positive preference by the public for a newly proposed policy. The findings can be used for decision making in Crete and in other, similar southern European environments by providing metrics for appropriate wildfire management.
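As a purely illustrative sketch of how stated choices over policy alternatives described by such attributes can be analysed (not the authors' estimation code), the example below fits a simple conditional (McFadden) logit by maximum likelihood to synthetic choice data; the attribute coding, the inclusion of a cost attribute for deriving willingness-to-pay, and all coefficient values are assumptions made for the example.

```python
# Conditional logit sketch on synthetic choice-experiment data (illustrative only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_tasks, n_alts = 400, 3
attrs = ["fire_risk", "agri_production", "landscape_quality", "cost"]

# Attribute levels for each alternative in each choice task (all assumed).
X = np.stack([
    rng.choice([10, 20, 30], (n_tasks, n_alts)),        # % chance of a large fire
    rng.choice([0, 1], (n_tasks, n_alts)),               # agricultural production protected?
    rng.choice([0, 1], (n_tasks, n_alts)),               # landscape quality improved?
    rng.choice([0, 20, 50, 100], (n_tasks, n_alts)),     # annual cost in euro (assumed attribute)
], axis=-1).astype(float)

true_beta = np.array([-0.08, 0.5, 0.6, -0.03])            # assumed "true" preferences
utility = X @ true_beta + rng.gumbel(size=(n_tasks, n_alts))
chosen = utility.argmax(axis=1)                            # simulated respondent choices

def neg_loglik(beta):
    v = X @ beta                                           # systematic utilities
    log_prob = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n_tasks), chosen].sum()

beta_hat = minimize(neg_loglik, np.zeros(len(attrs)), method="BFGS").x
print(dict(zip(attrs, beta_hat.round(3))))

# Marginal willingness-to-pay for an attribute: -beta_attribute / beta_cost.
wtp = {a: round(-beta_hat[i] / beta_hat[-1], 1) for i, a in enumerate(attrs[:-1])}
print("WTP (euro per unit):", wtp)
```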

How to cite: Misal, H., Kountouris, I., Voulgarakis, A., and Rovithakis, A.: Eliciting public preferences for wildfire management policies in Crete, Greece, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9542, https://doi.org/10.5194/egusphere-egu22-9542, 2022.

The concept of ecosystem services (ES), understood as the set of components of natural capital that provide products and services to humans, was born around the middle of the last century and reached a more systematic definition in the early 2000s with the Millennium Ecosystem Assessment (MA, 2005). The topic is implicitly linked to prominent research themes such as climate change, population well-being, and the fight against world hunger, and it has attracted significantly increasing interest from scientific research since the adoption of the SDGs defined in the 2030 Agenda.

With the momentum of investigation into this new branch, various tools have been created for dealing with ecosystem services not only from a qualitative point of view but also in quantitative terms. The present work aims to analyze the applicability of a specific vegetation-focused ES quantification software suite, based both on the use of meteorological data and on the acquisition of field data, and capable of returning outputs relating to the main components: environment (air quality), soil (use and cover), and water (quality and quantity of runoff, with a focus on vegetation hydrology). The combination of this eco-hydrological model with a monetary ES evaluation is also of interest: although the economic model considered is particularly simple and therefore characterized by a non-negligible standard error, it is important to underline the direct and spontaneous association between ES and monetary quantification made by the software, in contrast to the end of the last century, when the economic value of nature was still largely neglected.

Finally, the main results of an ES quantification project in an Italian urban context will be discussed, underlining the environmental improvement of the surroundings and the social benefits for the population.

How to cite: Busca, F. and Revelli, R.: Ecosystem services, monetary value and social sphere: a specific-vegetation software suite on a urban-scale project, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11996, https://doi.org/10.5194/egusphere-egu22-11996, 2022.

EGU22-12354 | Presentations | ITS4.2/ERE1.11

Social capital in stressed social-ecological systems: understanding social learning in agricultural communities in China to aid environmental policy and practice 

Ying Zheng, Larissa A. Naylor, Weikai Wang, Alasdair Stanton, David Oliver, Neil Munro, Nai Rui Chng, Susan Waldron, and Tao Peng

Social learning is increasingly used to address environmental challenges, including sustainable farming. How sustainable agricultural knowledge is co-produced, shared and used between farmers, scientists and government is important for building capacity and trust for sustainability in stressed socio-ecological communities worldwide. However, such understanding is largely lacking in developing economies. This research presents findings from an analysis of smallholder farmers' social learning in three agricultural regions in China. Combining an existing social capital framework with questionnaires (Q) and interviews (I) with farmers (Q n=632; I n=30) and officials (Q n=77, I n=64), we demonstrate how farmers access and share farming knowledge through bonding, bridging and linking networks. In two regions, family bonding was the dominant learning pathway, while linking networks for accessing 'formal knowledge' from government (or scientists) were limited. In the third region, however, government played a more important role in farmers' knowledge sharing and acquisition. In all regions, learning from researchers was largely absent. Key suggestions on ways to enhance the use of multiple forms of knowledge are provided. First, this study highlights the need for a more locally and socially embedded approach to facilitate farmers' knowledge exchange and learning, in order to build trust and capacity to better address pressing local environmental challenges. Second, we show how research on social dynamics can usefully inform knowledge exchange plans for collaborative, international development science, so that it is best suited to local contexts, optimising research impacts and capacity building and avoiding mismatches.

How to cite: Zheng, Y., Naylor, L. A., Wang, W., Stanton, A., Oliver, D., Munro, N., Chng, N. R., Waldron, S., and Peng, T.: Social capital in stressed social-ecological systems: understanding social learning in agricultural communities in China to aid environmental policy and practice, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12354, https://doi.org/10.5194/egusphere-egu22-12354, 2022.

To achieve the ambitious but necessary climate targets set by the Paris Agreement, the IPCC model pathways for limiting global warming to 1.5°C compared to pre-industrial levels make apparent the need for safeguarding and enhancing the natural global carbon sink – including via carbon dioxide removal (CDR). A range of ocean-based CDR approaches, also termed “negative emissions technologies” (NETs), has been proposed to make use of the ocean’s potential to take up carbon dioxide from the atmosphere and store it in water, biomass, and sediments. The governance framework in place to regulate CDR in the ocean is, at this time, limited to the direct and explicit regulation of ocean fertilization. Meanwhile, other NETs such as ocean alkalinity enhancement and artificial upwelling are emerging, but comprehensive and foresight-oriented regulation for testing or even deploying them at larger scales is missing. Specifically, there is large uncertainty about the unintended (positive and negative) effects of these technologies on the condition of the ocean, beyond enhanced carbon uptake and storage, and about how these may impede or support other global sustainability goals. The deployment of NETs in the ocean poses additional governance complexities relating to unknowns, uncertainties, and transboundary issues. In a study that is part of the EU H2020 project OceanNETs, we explore to what extent the current global governance framework directly or indirectly regulates emerging ocean-based NETs and reflect on the particularities and requirements for their comprehensive governance. The analysis considers the gaps, challenges, needs, and opportunities for comprehensive governance of ocean-based NETs.

How to cite: Neumann, B. and Röschel, L.: Global governance of ocean-based negative emission technologies. Exploring gaps, challenges, and opportunities, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-893, https://doi.org/10.5194/egusphere-egu22-893, 2022.

EGU22-1956 | Presentations | ITS4.4/ERE1.10

The implementation of ecological engineering in Tibet has strengthened the local human-policy-resource connection 

Yijia Wang, Yanxu Liu, Xutong Wu, Xinsheng Wang, Ying Yao, and Bojie Fu

Facing the dual threats of climate and socio-economic change, how the social-ecological systems (SES) of the Tibet Autonomous Region can seize the opportunity of ecological restoration to enhance environmental quality while improving the relationship between humans and nature is of great significance for promoting regional sustainable development. Regarding humans as the key component, we used Ostrom's SES framework as an analytical foundation to analyze the impact of the implementation of ecological engineering on the local human-policy-resource connection. We distributed questionnaires to local residents, distinguishing experimental groups (EG, n=325) from control groups (CG, n=165), and used a network approach to construct indicators for assessing the effectiveness of ecological engineering, including overall connectivity and evenness. Meanwhile, random forest regression was used to explore the background variables dominating the connections and, accordingly, to propose subsequent directions for optimal governance. We found that interviewees in areas where ecological engineering had been implemented had more positive perceptions of the importance of ecosystem services, the relationship between ecological conservation and well-being, attitudes toward ecological engineering, and the impact of the measures. The overall connectivity and evenness of the EG were significantly higher than those of the CG. The implementation of ecological engineering enhanced the connection between local people and the environment, but caused some inconvenience to local residents' livelihoods. Besides, elevation and annual precipitation were the background variables that dominated overall connectivity, which was lower in alpine steppes at elevations of around 4000 m and in semi-arid areas with annual precipitation of around 400-500 mm. The implementation of ecological engineering played a positive role in alleviating tensions in the human-nature relationship and in promoting the collective governance of common-pool resources, but the governance process still involves risks. Safeguarding and improving residents' livelihoods and strengthening the regionally weak SES coupling caused by geographical constraints are the future directions for optimal governance.
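As a purely illustrative sketch (not the authors' survey-based indicators), the example below builds a small human-policy-resource network from hypothetical reported connections and computes two simple metrics in the spirit of "overall connectivity" (network density) and "evenness" (Shannon evenness of node degrees); all node names and edges are invented for the example.

```python
# Hypothetical human-policy-resource network; simple connectivity and evenness
# indicators in the spirit of those described in the abstract (illustrative only).
import networkx as nx
import numpy as np

edges = [
    ("herder household", "grassland"), ("herder household", "eco-compensation policy"),
    ("eco-compensation policy", "grassland"), ("village committee", "grazing ban"),
    ("grazing ban", "grassland"), ("herder household", "village committee"),
]
G = nx.Graph(edges)

# "Overall connectivity": share of possible links that are actually reported.
connectivity = nx.density(G)

# "Evenness": Shannon evenness of the node degree distribution.
degrees = np.array([d for _, d in G.degree()], dtype=float)
p = degrees / degrees.sum()
evenness = -(p * np.log(p)).sum() / np.log(len(p))

print(f"connectivity (density): {connectivity:.2f}")
print(f"degree evenness:        {evenness:.2f}")
```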

How to cite: Wang, Y., Liu, Y., Wu, X., Wang, X., Yao, Y., and Fu, B.: The implementation of ecological engineering in Tibet has strengthened the local human-policy-resource connection, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1956, https://doi.org/10.5194/egusphere-egu22-1956, 2022.

EGU22-1978 | Presentations | ITS4.4/ERE1.10

Mapping NBS stakeholders’ perspective over Sludge Treatment Reed Bed (STRB) in Iceland 

Amir Gholipour, Elizabeth Duarte, Rita Fragoso, Ana Galvao, and David Christian Finger

Nature-Based Solutions (NBSs) like Sludge Treatment Reed Beds (STRBs) can address resource recovery from sewage sludge in urban and rural areas to boost the circular economy and to mitigate climate change. To ensure successful implementation of STRBs, an evaluation of stakeholders' perceptions can help identify relevant barriers and opportunities. In this study, semi-structured interviews were conducted with relevant stakeholders, categorized into five interest groups across Iceland: academics, state and government, NGOs, water companies, and local communities. The interviews were then transcribed, and the elements influencing STRB technology in Iceland were identified through open coding of the transcriptions. The elements were categorized as independent elements (NBS actors, on-going projects, and feasibility, legal, economic, sociological, and natural criteria), grouped into seven classes impacting the dependent elements (relevant aspects of STRBs, STRB services, and system cost). Causal Diagrams (CDs) were used to visualize the impact of the independent elements on the dependent elements. The results of the study are presented in eight causal networks and four aggregated CDs for sustainability, climate change, biodiversity, and the circular economy, together with mediators interpreting the impacts. The CDs depict the complexity of the multi-sequenced, heterogeneous causalities implied by stakeholders' reports and expectations. The study provides information on the aspects where further research is required to facilitate the use of STRBs for the resource recovery of sewage sludge in Iceland. Our findings can therefore provide decision makers with intracommunity information to identify the elements impacting STRB application, taking into account the influence of the multiple interest groups.

 

Keywords: Nature-Based Solutions, Sludge Treatment Reed Beds, Resource Recovery, Causal Diagrams, climate change, circular economy, sustainability

How to cite: Gholipour, A., Duarte, E., Fragoso, R., Galvao, A., and Christian Finger, D.: Mapping NBS stakeholders’ perspective over Sludge Treatment Reed Bed (STRB) in Iceland, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1978, https://doi.org/10.5194/egusphere-egu22-1978, 2022.

EGU22-4699 | Presentations | ITS4.4/ERE1.10

Assessing global macroalgal carbon dioxide removal potential using a high-resolution ocean biogeochemistry model 

Manon Berger, Laurent Bopp, David T. Ho, and Lester Kwiatkowski

Carbon dioxide removal (CDR) has become part of the portfolio of solutions to mitigate climate change. In combination with emission reductions, CDR may be critical to achieving the goal of limiting global warming to below 2°C, as outlined in the Paris Agreement. Due to its potential high productivity and environmental co-benefits, macroalgae cultivation has recently become a prominent ocean-based CDR strategy. However, estimates of the CDR potential of large-scale deployment are highly limited. Here we simulate idealized global deployment of macroalgae-based CDR using the NEMO-PISCESv2 ocean biogeochemical model at high spatial resolution (0.25° nominal horizontal resolution). Macroalgae growth is confined to the upper 100m of the water column in Exclusive Economic Zones (EEZ) free of sea ice and with an appropriate nitrate/phosphate regime. Although the loss of dissolved inorganic carbon (DIC) through macroalgal growth enhances the flux of atmospheric carbon into the ocean, this increase in carbon uptake is less than the rate of macroalgal production. In the absence of any nutrient limitation on growth, the enhancement in ocean carbon uptake is only 73-77% of the carbon lost from the water column due to macroalgal production. However, when macroalgae nutrient limitation/uptake is additionally accounted for, the increase in ocean carbon uptake accounts for only 41-42% of the potential carbon lost through macroalgae production. These inefficiencies are due to ocean transport replacing part of the DIC lost in the upper water column with DIC from depth, the influence of local nutrient concentrations on the vertical profile of macroalgal production, and feedbacks on the nutrient resources available for phytoplankton net primary production. CDR efficiency is shown to scale near-linearly between scenarios assuming 1% to 10% of the global EEZ area is cultivated for macroalgae. The efficiency of macroalgal CDR shows significant regional variability, with much of the enhancement in ocean carbon uptake (43%-46%) occurring outside EEZs, posing potential difficulties to national scale accounting.
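To make the efficiency bookkeeping concrete, the toy calculation below reproduces the kind of ratio reported in the abstract, namely the enhancement of air-sea carbon uptake divided by the carbon removed from the water column by macroalgal production; the numerical values are illustrative placeholders, not model output.

```python
# Toy CDR-efficiency bookkeeping for macroalgal cultivation (illustrative numbers).
def cdr_efficiency(extra_air_sea_uptake_pgc, macroalgal_production_pgc):
    """Fraction of the DIC removed by macroalgae that is replaced by additional
    CO2 uptake from the atmosphere (rather than by DIC transported from depth)."""
    return extra_air_sea_uptake_pgc / macroalgal_production_pgc

production = 1.00                  # Pg C yr-1 removed from the upper ocean (placeholder)
uptake_no_nutrient_limit = 0.75    # additional air-sea flux, idealized case (placeholder)
uptake_with_nutrients = 0.42       # additional air-sea flux, nutrient feedbacks included (placeholder)

for label, uptake in [("no nutrient limitation", uptake_no_nutrient_limit),
                      ("with nutrient feedbacks", uptake_with_nutrients)]:
    print(f"{label}: efficiency = {cdr_efficiency(uptake, production):.0%}")
```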

How to cite: Berger, M., Bopp, L., Ho, D. T., and Kwiatkowski, L.: Assessing global macroalgal carbon dioxide removal potential using a high-resolution ocean biogeochemistry model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4699, https://doi.org/10.5194/egusphere-egu22-4699, 2022.

EGU22-5175 | Presentations | ITS4.4/ERE1.10 | Highlight

Completing Urban GHG Emissions Data to Assess the Effectiveness of Climate Action Plans in Europe 

Jessica Page, Haozhi Pan, and Zahra Kalantari

Urban areas are major contributors to global greenhouse gas (GHG) emissions. To address climate change, many cities have developed climate action plans (CAPs) as strategic roadmaps to reduce their emissions and strive for emission neutrality and climate resilience by 2050 or earlier. It has been more than a decade since the first of these plans were put in place, and it is now important to evaluate them and to assess whether city-level climate ambitions will be realised or perhaps need adjustment to pursue improvements in climate resilience over time.

This work aims to further our understanding of urban GHG emissions by completing existing urban carbon emissions data with blue-green contributions to the urban carbon cycle. In a previous study, it was found that the inclusion of blue-green emissions in urban carbon accounting in Stockholm, Sweden, had a significant impact on that region's ability to reach net zero emissions in the coming decades (Page et al., 2021). In this study, we complete the urban emissions data for cities across the European Union (EU) in order to assess if, and for which types of cities, the inclusion of blue-green emissions in GHG accounting is similarly relevant.

Furthermore, we will use data on the CAPs produced and implemented by these cities, together with the completed GHG emissions data, to assess whether the actions and plans made by many European cities have actually had an impact on their emissions. The inclusion of blue-green emissions and sequestration in this assessment is particularly important, as many of the strategies included in CAPs affect blue-green areas, for example through the implementation of nature-based solutions (NBS).

Conclusions will be drawn about the role of green-blue areas in urban GHG emissions, the role which CAPs have played in reducing emissions in European cities, and how and where these could potentially be adapted to further reduce future GHG emissions in urban areas.

Keywords: Sustainable cities; Greenhouse Gas Emissions; Nature-based Solutions; Climate Action Plans

References:

Page J, Kåresdotter E, Destouni G, et al. (2021) A more complete accounting of greenhouse gas emissions and sequestration in urban landscapes. Anthropocene 34: 100296. DOI: 10.1016/j.ancene.2021.100296.

How to cite: Page, J., Pan, H., and Kalantari, Z.: Completing Urban GHG Emissions Data to Assess the Effectiveness of Climate Action Plans in Europe, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5175, https://doi.org/10.5194/egusphere-egu22-5175, 2022.

EGU22-6126 | Presentations | ITS4.4/ERE1.10

Investigating potential climatic side-effects of a large-scale deployment of photoelectrochemical devices for carbon dioxide removal 

Moritz Adam, Thomas Kleinen, Matthias M. May, Daniel Lörch, Arya Samanta, and Kira Rehfeld

Without substantial decarbonization of the global economy, rising atmospheric carbon dioxide (CO2) levels are projected to lead to severe impacts on ecosystems and human livelihoods. Integrated assessments of economy and climate therefore favour large-scale CO2 removal to reach ambitious temperature-stabilization targets. However, most of the proposed approaches to artificially remove CO2 from the atmosphere are in conflict with planetary boundaries due to land-use needs, and they may come with unintended climatic side-effects. Long-term draw-down of CO2 by photoelectrochemical (PEC) reduction is a recent and promising approach that potentially entails a very low water footprint and could offer a variety of carbon sink products for safe geological storage. For renewable hydrogen fuel production, PEC devices have already been demonstrated to deliver high solar-to-fuel efficiencies. If such devices are adjusted to deliver high solar-to-carbon efficiencies for carbon dioxide removal, they would require comparably little land for achieving annual sequestration rates that are compatible with limiting global warming to 2°C or below. Yet, no production-scale prototype exists, and the climatic side-effects of such an “artificial photosynthesis” approach for negative emissions are unknown. Here, we discuss our work towards investigating potential impacts of PEC CO2 removal on the climate and the carbon cycle in simulations with the comprehensive Earth System Model MPI-ESM. We designed a scheme to represent hypothetical PEC devices as a land surface type that influences land-atmosphere energy and moisture fluxes. We parameterize the irradiation-driven carbon sequestration of the devices and interactively couple their deployment area and location to a negative emission target. We plan to compare the potential side-effects between scenarios of dense, localized deployment and spread-out, decentralized application. These scenarios represent different guiding objectives for deploying hypothetical PEC systems, such as maximizing the insolation per module area or mitigating the overall impacts on climate and on carbon stocks. For the different scenarios, we intend to investigate changes in the surface balances, which could impact atmospheric circulation patterns. We further plan to quantify the amount of land-stored carbon that is relocated due to land-use change, as this affects the amount of CO2 that can effectively be withdrawn from the atmosphere. Finally, we relate theoretical expectations for area requirements and CO2 withdrawal to results from the coupled simulations, which could inform the technological development. While ambitious emission reductions remain the only appropriate measure for stabilizing anthropogenic warming, our work could advance the understanding of possible benefits and side-effects of hypothetical PEC CO2 removal.

M. M. May & K. Rehfeld, ESD Ideas: Photoelectrochemical carbon removal as negative emission technology. Earth Syst. Dynam. 10, 1–7 (2019).

How to cite: Adam, M., Kleinen, T., May, M. M., Lörch, D., Samanta, A., and Rehfeld, K.: Investigating potential climatic side-effects of a large-scale deployment of photoelectrochemical devices for carbon dioxide removal, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6126, https://doi.org/10.5194/egusphere-egu22-6126, 2022.

EGU22-7019 | Presentations | ITS4.4/ERE1.10

Forecasting impacts of climate change on plantation carbon sink capability 

Hung-En Li and Su-Ting Cheng

In the face of climate change, the government of Taiwan requires new mitigation policies and implementation strategies. As forest plantations are commonly accepted as important carbon sinks, developing reliable carbon accounting systems that link forestry carbon sequestration to green carbon credits in the economic sector requires synergistic integration to examine the potential carbon sink capability of forest plantations under an ever-changing climate. In this regard, this study developed a process-based stand growth model, based on the structure of the Physiological Principles for Predicting Growth (3-PG) model, to estimate carbon sequestration of Sugi plantations in the National Taiwan University (NTU) Experimental Forest. The model uses monthly solar radiation, temperature, precipitation, vapor pressure deficit (VPD), and atmospheric carbon dioxide concentration to simulate dynamic biomass production, and then allocates the simulated biomass to root, stem, and foliage via allometric equations fitted to biomass data from the SugiHinoki Database. Stand mortality was then determined using zero-inflated Poisson modelling of long-term growth data collected by the NTU Experimental Forest during 1921-2019. In addition, we performed a scenario analysis to forecast future stand growth under four climate scenarios: RCP2.6, RCP4.5, RCP6.0, and RCP8.5. Results revealed a higher annual biomass increment (around 4 t ha-1 y-1) at the end of the century under RCP6.0 and RCP8.5, and a lower increment (around 2.5 t ha-1 y-1) under RCP2.6 and RCP4.5. A step-wise multiple linear regression analysis of the simulated growth data and climatic inputs revealed a stronger positive impact of CO2 concentration than of precipitation on unit biomass primary production (NPP/biomass). Temperature had a counteracting impact comparable in magnitude to that of precipitation, and solar radiation showed the smallest negative influence on unit biomass primary production. Based on this process-based stand growth model, we are able to examine the relation between climatic variables and carbon sequestration rates and to help sketch the prospects of plantations in the carbon market for plantation managers, investors, and policy makers.
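For illustration only (not the authors' model code), the sketch below shows the skeleton of a 3-PG-style monthly biomass update: gross production from absorbed radiation reduced by environmental modifiers, a fixed NPP/GPP ratio, and allocation of NPP to foliage, stem, and root pools using fixed fractions standing in for the fitted allometric allocation; all parameter values are placeholders rather than the fitted Sugi values.

```python
# Skeleton of a 3-PG-style monthly biomass update (placeholder parameters).
from dataclasses import dataclass

ALPHA_CANOPY = 0.05   # mol C per mol absorbed PAR, placeholder canopy quantum efficiency
NPP_RATIO = 0.47      # NPP/GPP ratio commonly used in 3-PG

@dataclass
class Stand:
    foliage_t_ha: float
    stem_t_ha: float
    root_t_ha: float

def monthly_growth(stand, apar_mol_m2, f_temp, f_vpd, f_soilwater, f_co2,
                   alloc=(0.15, 0.60, 0.25)):
    """One monthly step: GPP from absorbed PAR reduced by environmental modifiers
    (the most limiting of VPD and soil water, times temperature and CO2 modifiers),
    then NPP allocated to foliage, stem, and roots with fixed fractions."""
    modifier = f_temp * f_co2 * min(f_vpd, f_soilwater)
    gpp_molc_m2 = ALPHA_CANOPY * apar_mol_m2 * modifier
    npp_c_t_ha = gpp_molc_m2 * 12.01 * 1e-2 * NPP_RATIO   # mol C m-2 -> t C ha-1
    npp_dm_t_ha = npp_c_t_ha / 0.5                        # assume 50% carbon content in dry matter
    f_frac, s_frac, r_frac = alloc
    stand.foliage_t_ha += npp_dm_t_ha * f_frac
    stand.stem_t_ha += npp_dm_t_ha * s_frac
    stand.root_t_ha += npp_dm_t_ha * r_frac
    return stand

# Example monthly step with placeholder climate modifiers:
stand = Stand(foliage_t_ha=8.0, stem_t_ha=120.0, root_t_ha=30.0)
stand = monthly_growth(stand, apar_mol_m2=500, f_temp=0.85, f_vpd=0.7,
                       f_soilwater=0.9, f_co2=1.05)
print(stand)
```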

How to cite: Li, H.-E. and Cheng, S.-T.: Forecasting impacts of climate change on plantation carbon sink capability, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7019, https://doi.org/10.5194/egusphere-egu22-7019, 2022.

EGU22-7324 | Presentations | ITS4.4/ERE1.10

Assessing co-benefits of urban greening coupled with rainwater harvesting management under current and future climates across USA cities 

Ziyan Zhang, Athanasios Paschalis, Ana Mijic, Barnaby Dobson, and Adrian Butler

Globally, urban areas will face multiple water-related challenges in the near future. The main challenges are intensified droughts leading to water scarcity, increased flood risk due to the intensification of extreme rainfall, increased total water demand due to a growing urban population, amplified urban heat island intensities due to urban sprawl, and a reduction in the urban carbon sink due to plant water stress. Urban greening is an excellent option for mitigating flood risk and excess urban heat, while rainwater harvesting (RWH) systems can cope with water supply needs and urban water management. In this study, we investigated how urban greening and RWH can work together to mitigate the aforementioned risks. We evaluate this joined-up management approach under climate projections for 30 cities in the USA spanning a variety of climates, population densities and urban landscapes. By incorporating a new RWH module into the urban ecohydrological model UT&C, with flexible operational rules for reusing harvested water for domestic use and for irrigating urban green space, we tested four intervention approaches: control, RWH installation, urban greening supported by RWH, and urban greening supported by traditional irrigation (i.e., supplied via mains water). Each intervention approach was evaluated using our adapted version of UT&C, forced by latest-generation convection-permitting simulations of current (2001-2011) and end-of-century (RCP8.5) climate from the Weather Research and Forecasting (WRF) model. The RWH volume is assumed to be 2000 L per household for all cities. Results showed that neither urban greening nor RWH contributes significantly to reducing the expected increase in canyon temperature, because of the strong change in background climate (i.e., increases in average atmospheric temperature). However, RWH alone can substantially reduce the intensifying surface flood risk and effectively enhance water conservation, and urban greening can significantly increase the carbon sink of cities, especially in dry regions and when supported by traditional irrigation. These results vary with the background climate: the benefits of urban greening, whether supported by RWH or traditional irrigation, for canyon temperature reduction and carbon sink improvement increase with average air temperature and decrease with wetness index, respectively, while the benefits of RWH for runoff reduction and water conservation both depend positively on local annual precipitation.
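As a purely illustrative sketch (not the UT&C RWH module), the example below steps a single household rainwater tank through a daily water balance with the 2000 L capacity mentioned in the abstract: roof runoff fills the tank, reuse demand draws it down, and any excess spills as overflow; the roof area, runoff coefficient, rainfall series, and demand are assumptions made for the example.

```python
# Daily water balance of a single 2000 L household rainwater tank (illustrative).
import numpy as np

TANK_CAPACITY_L = 2000.0
ROOF_AREA_M2 = 100.0       # assumed contributing roof area
RUNOFF_COEFF = 0.9         # assumed fraction of roof rainfall captured

def rwh_balance(rain_mm, demand_l):
    """Return daily harvested supply and overflow for rainfall and demand series."""
    storage, supplied, overflow = 0.0, [], []
    for p, d in zip(rain_mm, demand_l):
        inflow = p * ROOF_AREA_M2 * RUNOFF_COEFF    # 1 mm on 1 m2 = 1 L
        storage += inflow
        spill = max(0.0, storage - TANK_CAPACITY_L)
        storage -= spill
        use = min(d, storage)                       # demand met from the tank
        storage -= use
        supplied.append(use)
        overflow.append(spill)
    return np.array(supplied), np.array(overflow)

rng = np.random.default_rng(7)
rain = rng.gamma(0.3, 8.0, size=365)                # synthetic daily rainfall (mm)
demand = np.full(365, 150.0)                        # assumed daily reuse demand (L)
supplied, overflow = rwh_balance(rain, demand)
captured = rain.sum() * ROOF_AREA_M2 * RUNOFF_COEFF
print(f"demand met from tank: {supplied.sum() / demand.sum():.0%}")
print(f"roof runoff retained: {1 - overflow.sum() / captured:.0%}")
```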

How to cite: Zhang, Z., Paschalis, A., Mijic, A., Dobson, B., and Butler, A.: Assessing co-benefits of urban greening coupled with rainwater harvesting management under current and future climates across USA cities, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7324, https://doi.org/10.5194/egusphere-egu22-7324, 2022.

Multiple disaster risks are interconnected and are commonly caused by ecosystem degradation. Ecosystem degradation also drives many of the world's major problems, including biodiversity loss, climate change, and poverty. Ecosystem-based solutions such as ecosystem-based adaptation, biodiversity conservation, and community forestry are increasingly implemented in various contexts. However, little is known about possible interlinkages, synergies, and trade-offs among those ecosystem-based responses and potential barriers to their integration. This study explores spatial and conceptual synergies and trade-offs among ecosystem-based adaptation, biodiversity conservation, and community forestry and the barriers to implementing integrated actions.

The study was located in the Ayeyarwady Delta, Myanmar. The research first used a comprehensive socio-ecological risk assessment framework and multi-risk impact chains to understand high-risk areas and identify potential areas for ecosystem-based adaptation. Potential areas for biodiversity conservation and community forestry were then identified using criteria developed from a literature review. Spatial autocorrelation was then tested, and a modified t-test was used to identify spatial relationships among the three sets of areas. Finally, qualitative expert interviews were conducted, and content analysis was used to understand conceptual synergies, trade-offs, and potential barriers to integrated action.
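For readers unfamiliar with the spatial statistics mentioned above, the sketch below shows a plain Moran's I computation as a basic check of spatial autocorrelation; it is a generic illustration with an assumed weights matrix, not the modified t-test actually used in the study.

```python
# Generic Moran's I sketch (illustrative; not the study's modified t-test).
import numpy as np

def morans_i(values, W):
    """values: (n,) attribute per spatial unit (e.g. a suitability score);
    W: (n, n) spatial weights matrix (e.g. contiguity-based)."""
    z = values - values.mean()
    n = values.size
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Toy example: four units on a line with rook-contiguity weights.
vals = np.array([1.0, 2.0, 8.0, 9.0])
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i(vals, W))   # positive value indicates spatial clustering
```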

Results show potential for both social and ecological synergies. Ecosystem-based adaptation and biodiversity conservation show synergies with community forestry in local governance and in the relevance of social factors such as multi-stakeholder awareness, indigenous knowledge, land tenure security, community rule-making and ownership, and biodiversity-friendly livelihoods. Synergies between ecosystem-based adaptation and biodiversity conservation are mostly related to ecological factors such as benefits for biodiversity, ecosystem health, and corridor and buffer functions. Moreover, significant spatial synergies were observed between community forestry and biodiversity conservation areas.

Despite these synergies, trade-offs exist and are mainly linked to social inequalities and the use of biodiversity-damaging practices. Spatial trade-offs occur between ecosystem-based adaptation and community forestry due to a lack of land tenure security in high-risk townships. Conceptual trade-offs between ecosystem-based adaptation and community forestry are mainly linked to inequality, lack of access, local power relations, and land tenure insecurity. Trade-offs between biodiversity conservation and the other two approaches are observed due to the use of monocultures, exotic species, and clear-cutting practices. Legal, social, and financial barriers were identified for the implementation of synergistic actions, while proper facilitation, community rule-making, and biodiversity-friendly livelihoods are key enabling factors in achieving sustainable ecosystem restoration.

This research argues that ecosystem-based adaptation, biodiversity conservation, and community forestry benefit each other, highlighting that fostering those synergies is key for ecosystem restoration and conservation in the face of climate change, biodiversity loss, and poverty. Furthermore, the research stresses the need to consider community governance and biodiversity aspects in ecosystem-based adaptation to address societal challenges.

How to cite: Wuit Yee Kyaw, H. and Sebesvari, Z.: Assessment of synergies and trade-offs among ecosystem-based adaptation, biodiversity conservation and community forestry in Ayeyarwady Delta, Myanmar, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7415, https://doi.org/10.5194/egusphere-egu22-7415, 2022.

EGU22-7797 | Presentations | ITS4.4/ERE1.10

Governance and science implications of low environmental impact outdoors solar radiation management experiments 

Gideon Futerman, Martin Janssens, Iris de Vries, John Dykema, Andy Parker, and Hugh Hunt

There are many uncertainties surrounding solar radiation management (SRM), which cannot all be quantified and reduced using models, laboratory experiments or observations of natural analogs such as volcanic eruptions, ship tracks, or dust storms. While there is broad consensus both in- and outside the scientific community that better understanding of the climate system is beneficial to policy makers and society, the value of improved knowledge of SRM has been highly controversial. Yet, it is evident that SRM research can contribute to quantifying and reducing important uncertainties pertaining to fundamental knowledge on the workings of the Earth system, while also providing essential specific knowledge on positive and negative impacts of SRM to inform future decisions.

In 2016, a group of SRM experts gathered at the Institute for Advanced Sustainability Studies in Potsdam for a workshop to formulate a set of low environmental impact SRM experiment proposals. We present these as a non-exhaustive set of possible experiments with no measurable environmental side effects that could provide valuable information that cannot be obtained from models or lab experiments. Both perturbative and non-perturbative experiments are proposed for different SRM methods, including marine cloud brightening, stratospheric aerosol injection and cirrus cloud thinning.

It was found that, in the period between 2016 and now, several of the research questions addressed in the experiment proposals have been answered by unrelated experimental environmental science studies, whereas no experimental studies have been carried out in the context of SRM. This finding shows that there is significant overlap between the high-priority research questions of non-SRM and SRM environmental research. In addition, it shows that non-controversial environmental science experiments can provide SRM-relevant knowledge similar to that of dedicated SRM experiments. Given that one of the main arguments against SRM research is the potential danger of the acquired knowledge, the finding that the knowledge obtained from non-SRM and SRM experiments can be similar raises the question of what effect a declared relationship to SRM should have on the review and regulation of outdoor research proposals.

How to cite: Futerman, G., Janssens, M., de Vries, I., Dykema, J., Parker, A., and Hunt, H.: Governance and science implications of low environmental impact outdoors solar radiation management experiments, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7797, https://doi.org/10.5194/egusphere-egu22-7797, 2022.

EGU22-7868 | Presentations | ITS4.4/ERE1.10 | Highlight

The potential of urban soils for carbon neutral cities 

Esko Karvinen, Leena Järvi, Toni Viskari, Minttu Havu, Olivia Kuuri-Riutta, Pinja Rauhamäki, Jesse Soininen, and Liisa Kulmala

Urban areas are notable sources of atmospheric CO2, and cities are currently setting up climate programs with the aim of carbon neutrality in the near future. For example, two major cities in Southern Finland, Helsinki and Turku, have set their targets for 2035 and 2029, respectively. Carbon neutrality can be achieved by reducing carbon emissions, compensating for them, and/or strengthening carbon sinks in urban vegetation and soils, the last of which is often deemed the most cost-efficient option. However, the current understanding of biogenic carbon cycling in urban environments is based on dynamics observed in better-studied ecosystems such as forests and agricultural lands. Urban ecosystems differ from non-urban areas in terms of temperature, precipitation and water cycling, pollution, and the level of human-induced disturbance. Thus, there is a need for observations of urban carbon to accurately model and estimate the carbon sinks and stocks in urban green space.

We aimed to monitor the urban biogenic carbon cycle with an extensive field campaign carried out around the SMEAR III ICOS station in 2020–2022, accompanied by a few satellite sites around the capital region of Finland. In this presentation, we will show soil carbon pools and the dynamics of soil respiration at five different types of urban green space: a managed park lawn with and without trees, a small urban forest, an apple orchard, and a street tree site. Soil respiration was measured with both regularly repeated manual chamber measurements and automatic chambers throughout two growing seasons. Soil carbon stock was estimated from soil samplings conducted in 2020 and 2021. We investigate the role of different drivers in soil CO2 emissions at the various urban green space types and compare them to corresponding metrics measured in non-urban areas. In addition, we test the applicability of the Yasso model for simulating soil carbon dynamics in urban areas.

How to cite: Karvinen, E., Järvi, L., Viskari, T., Havu, M., Kuuri-Riutta, O., Rauhamäki, P., Soininen, J., and Kulmala, L.: The potential of urban soils for carbon neutral cities, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7868, https://doi.org/10.5194/egusphere-egu22-7868, 2022.

There are many uncertainties surrounding solar radiation management (SRM), not least concerning the technological feasibility of hypothetical deployment scenarios. In sulfate stratospheric aerosol injection (SAI) scenarios, the radiative effectiveness of the aerosol is governed by its size distribution. In turn, the aerosol size distribution is governed by the aerosol-precursor injection rate and injection plume conditions. Hence, uncertainties in the cost and environmental impact of aircraft-based SAI are primarily determined by uncertainties in injection plume conditions. In addition, the climate impacts and side effects of SAI as simulated by climate models depend on the prescribed initial conditions concerning aerosol characteristics, which also hinge on injection plume dynamics and microphysics.

Up to now, studies into aircraft-based SAI have used simplified plume models, which estimate plume dynamics with considerable uncertainty, and which do not account for effects of the local plume dynamics on the microphysical processes. Here, we work towards reducing this uncertainty by using full computational fluid dynamics representations of plume dynamics within simulations incorporating state-of-the-art microphysics models for the computation of aerosol size distributions in aircraft engine plumes.

In order to anchor our approach in the current literature, we first consider simplified problems with the objective of validating our methodology against existing results. These experiments confirm the attainability of favourable initial aerosol size distributions under roughly the same conditions as shown with other lower-fidelity models. However, our results still disagree with previous studies concerning the exact aerosol growth behaviour, highlighting a sensitivity to model choice which may also explain apparent contradictions among those previous studies.

We then consider a RANS computational fluid dynamics representation of an engine plume. This differs from the simplified plume representation in several ways, including realistic local variations in temperature, vorticity, and eddy viscosity resulting from an inflow determined using a state-of-the-art engine model. This representation is currently being employed in combination with the previously validated microphysical models to simulate realistic aerosol size evolutions for aircraft-based delivery scenarios.

We anticipate our results to (1) provide a higher-confidence foundation on which to base the discussion concerning technological feasibility of SAI-based SRM and (2) constrain the uncertainty range of inputs for model and impact studies, improving reliability of simulations of (desired and undesired) effects of potential SRM scenarios and thereby informing the scientific and public debate.  

How to cite: Tluk, A., de Vries, I., Janssens, M., and Hulshoff, S.: Towards higher fidelity simulations of aerosol growth in aircraft plumes for feasibility and impact assessment of sulfate stratospheric aerosol injection, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7923, https://doi.org/10.5194/egusphere-egu22-7923, 2022.

 


The challenges posed by the growth of urbanization in Egypt and the development of new cities play an essential role in applying the circular economy (CE) in the construction materials sector and in setting priorities for promoting sustainable construction activities in the future. The construction sector has many adverse environmental impacts on energy and natural resource consumption, from materials production and operation to disposal in landfills. Consequently, the industry is considered one of the largest consumers of non-renewable resources and producers of CO2 emissions. On the other hand, applying Nature-based Solutions (NbS) to enhance sustainability by protecting ecosystems while maintaining economic benefits plays a vital role, especially for new Egyptian cities. The research aims to investigate the role of NbS in achieving a CE for construction materials and reducing its negative impacts within the scope of three factors: green building materials, waste management systems, and renewable energy use. The research attempts to answer how NbS can improve the CE and reduce the environmental impacts of the construction materials sector. A SWOT analysis was used to investigate the strengths, weaknesses, opportunities, and threats of using NbS strategies at three different construction sites in Egypt. Furthermore, a questionnaire survey of 40 participants, including consultants, architects, civil engineers, site engineers, and project managers, was used to identify the interactions between the derived parameters and to review previous research efforts. As a result, a conceptual framework was created for construction materials, considering reduction, reuse, recycling, recovery, and disposal, to identify the impact of implementing NbS on achieving sustainable development strategies in the Egyptian construction sector. The results showed that NbS could effectively support the construction sector and achieve environmental and economic benefits, which consequently help the transition to a CE. Therefore, new sustainable policies and cooperation between the public and private sectors are needed to support investment in sustainable strategies in the construction materials market and to increase Egyptian society's awareness of the benefits of NbS in economic, environmental, and social aspects.

Keywords: Nature-based Solutions, construction materials, circular economy, Egypt

How to cite: Marey, H., Szabó, G., and Kozma, G.: Using the Nature-Based Solutions for Applying Circular Economy for the Construction Materials Sector in Egypt, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7997, https://doi.org/10.5194/egusphere-egu22-7997, 2022.

EGU22-8269 | Presentations | ITS4.4/ERE1.10

Carbon cycle feedbacks in an idealized and a scenario simulation of carbon dioxide removal in CMIP6 Earth system models 

Ali Asaadi, Jörg Schwinger, Hanna Lee, Jerry Tjiputra, Vivek Arora, Roland Séférian, Spencer Liddicoat, Tomohiro Hajima, Yeray Santana-Falcòn, and Chris Jones

Limiting global warming to 1.5°C by the end of the century currently seems to be an ambitious target which will potentially be accompanied by a period of temperature overshoot. Achieving this climate goal might require massive carbon dioxide removal at large scales. Regardless of the feasibility of such removals, their effects on biogeochemical cycles and climate are not well understood. Changes in atmospheric CO2 concentration ([CO2]) and climate alter the CO2 exchange between the atmosphere and the underlying carbon reservoirs of land and ocean. Carbon-concentration and carbon-climate feedback metrics are useful tools for quantifying such changes in the carbon uptake by land and ocean, which currently act as sinks of carbon. We investigate the changes in carbon feedbacks under overshoot scenarios that could influence mitigation pathways towards the temperature goal. An ensemble of Coupled Model Intercomparison Project 6 (CMIP6) Earth system models that conducted an idealized ramp-up and ramp-down experiment (1pctCO2, with increasing and later decreasing [CO2] at a rate of 1% per year) is used and compared against a scenario simulation involving negative emissions (SSP5-3.4-OS). The analyses are based on results from biogeochemically coupled simulations (where land and ocean respond to rising CO2 levels but the climate is kept constant) and fully coupled simulations. For the positive emission phases, the model-mean global average carbon-climate feedback is roughly similar between the SSP5-3.4-OS and 1pctCO2 simulations, with a gradual monotonic decrease in absolute value, which translates to a reduction in land and ocean uptake. The carbon-concentration feedback in SSP5-3.4-OS is larger than in the 1pctCO2 simulations over the ocean. Both the ocean and land simulate an increase in carbon uptake during the ramp-up, while during the ramp-down their uptake shows a hysteresis behavior. This feature is more prominent in the idealized 1pctCO2 experiment, with its higher [CO2] growth rate and no land use change effects, than in the more realistic SSP5-3.4-OS scenario. Also, the time evolution of the global annual carbon-concentration and carbon-climate feedbacks seems to be very similar over natural land areas. In addition, changes in carbon fluxes are compared over the high-latitude permafrost and non-permafrost regions of the Northern Hemisphere. Over land, the carbon-concentration feedback metric is decomposed into different terms to investigate the contributions from changes in live vegetation carbon pools and soil carbon pools. This indicates that the feedback is dominated by the residence time of carbon in vegetation and soil. Furthermore, building on previous studies, feedback metrics are also calculated using an alternative approach of instantaneous flux-based feedback metrics to further compare differences between models. The difference between the two approaches is most obvious in the geographical distribution of the two feedbacks, especially for the negative emission phases of the 1pctCO2 experiment.
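As a brief illustration of the feedback metrics referred to above (a hedged sketch of the standard Friedlingstein-style decomposition, not the authors' diagnostic code), the snippet below estimates the carbon-concentration (beta) and carbon-climate (gamma) parameters from a biogeochemically coupled and a fully coupled simulation; the numbers in the example are invented.

```python
# Hedged sketch: estimating carbon-concentration (beta) and carbon-climate
# (gamma) feedback parameters from a BGC-coupled and a fully coupled run.
def feedback_metrics(dC_bgc, dC_cou, dCO2, dT_cou):
    """
    dC_bgc : change in land or ocean carbon storage in the BGC run (PgC)
    dC_cou : change in carbon storage in the fully coupled run (PgC)
    dCO2   : change in atmospheric CO2 (ppm), identical in both runs
    dT_cou : change in global mean temperature in the coupled run (K)
    Returns (beta in PgC/ppm, gamma in PgC/K).
    """
    # In the BGC run the climate is held fixed, so all uptake is CO2-driven.
    beta = dC_bgc / dCO2
    # The residual uptake in the coupled run is attributed to climate change.
    gamma = (dC_cou - dC_bgc) / dT_cou
    return beta, gamma

# Example with illustrative (made-up) numbers for a ramp-up phase:
beta, gamma = feedback_metrics(dC_bgc=450.0, dC_cou=380.0,
                               dCO2=560.0, dT_cou=3.2)
print(f"beta = {beta:.2f} PgC/ppm, gamma = {gamma:.1f} PgC/K")
```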

How to cite: Asaadi, A., Schwinger, J., Lee, H., Tjiputra, J., Arora, V., Séférian, R., Liddicoat, S., Hajima, T., Santana-Falcòn, Y., and Jones, C.: Carbon cycle feedbacks in an idealized and a scenario simulation of carbon dioxide removal in CMIP6 Earth system models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8269, https://doi.org/10.5194/egusphere-egu22-8269, 2022.

EGU22-8598 | Presentations | ITS4.4/ERE1.10 | Highlight

A systematic analysis of Horizon 2020 Nature-Based adaptation Solutions projects 

Mario Al Sayah, Pierre-Antoine Versini, and Daniel Schertzer

With the advances of the Nature-Based Solutions (NBS) concept, much attention is being given to its potential for climate change adaptation. Accordingly, Nature-Based adaptation Solutions (NBaS) have become central elements of climate action. In the EU, the Horizon 2020 (H2020) program translates the ambition of positioning Europe as the world's leader in NBS. To draw a comprehensive roadmap of these efforts, this study investigates 21 H2020 projects that apply NBaS across different ecosystems. The main objectives of this study are to provide an inventory of current knowledge, to extract identified risks and knowledge limitations, and to propose future research orientations.

For this purpose, the CORDIS database was used to identify the relevant projects. Using the keyword nature-based solutions and through a rigorous search of research topics and programs, the following projects were retained (based on the existence of deliverables at the time of this study): CLEARING HOUSE, CLEVER Cities, Connecting Nature, DRYvER, EdiCitNet, EuPOLIS, FutureMARES, GrowGreen, NAIAD, Nature4Cities, NATURVATION, OPERANDUM, PHUSICOS, proGIreg, RECONECT, REGREEN, RENATURE, ThinkNature, UNaLab, Urban GreenUP and URBiNAT. Consequently, 137 deliverables were individually examined. Numerous findings were then obtained. These were divided into general and specific results.

In terms of general results, the definition of the NBS concept is still debated: some projects adopt the EC's definition, others compare the EC's and the IUCN's definitions, while many reformulate their own. Second, pilot sites are densely concentrated towards the south-west of the continent, in contrast to a sparser distribution towards the north-east. In terms of target ecosystems, 61% of the projects target the urban realm, while freshwater ecosystems come second. The coastal, natural and mountainous environments are the least addressed. The focus on urban systems makes most of the generated knowledge, designed solutions and monitoring methods more or less restricted to this realm, and hence not necessarily applicable in other settings. Regarding climatic challenges, urban heat islands and floods come first, followed by sea level rise, intense precipitation, heat stress, storms, erosion and landslides.

In terms of specific findings, current knowledge and limitations were grouped in depth per ecosystem (urban, freshwater, marine-coastal, mountainous, forest-natural, and agricultural) and per main research topic (climate change adaptation, risks of oversimplification, system complexity, uncertainty, the scale quandary, progress measuring and monitoring, and disservices). On this basis, several research perspectives were proposed. Accordingly, interest in NBS-NBaS should extend beyond the urban ecosystem, while deeper knowledge of nature (the physical fundamentals of the N) in NBS-NBaS is needed. It is also important to understand whether NBaS are intended to withstand weather change and/or climate change. For the implementation of wide-scale solutions, an extension beyond conservationism is needed, and a better accommodation of uncertainties is required. Therefore, understanding ecosystem tipping points, thresholds, and the resource efficiency of NBaS is paramount. Finally, it is crucial to acknowledge that both ecosystem development and climate change will keep progressing throughout the lifetime of NBaS. Therefore, the interacting co-evolution of ecosystems, NBaS and climate change should be studied further, as this interaction is easily overlooked.

How to cite: Al Sayah, M., Versini, P.-A., and Schertzer, D.: A systematic analysis of Horizon 2020 Nature-Based adaptation Solutions projects, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8598, https://doi.org/10.5194/egusphere-egu22-8598, 2022.

EGU22-8613 | Presentations | ITS4.4/ERE1.10 | Highlight

Spatial deployment of Nature-based Solutions to support carbon neutrality for 50 EU cities. 

Haozhi Pan, Jessica Page, Cong Cong, and Zahra Kalantari

A clear implementation plan for Nature-based Solutions (NBS), beyond conceptualization, is critical for successfully mitigating urban carbon emissions. In this paper, we demonstrate an approach to deploy NBS on a high-resolution (25x25 m) land use grid and quantify its carbon emission reduction benefits for 50 major European Union (EU) cities. The deployment process has three parts: 1) downscaling carbon emission data from coarser spatial scales (10x10 km GID data) to high-resolution cells using land use and socioeconomic data; 2) identifying opportunities for and the suitability of deploying NBS in these land use cells from a database with meta-analysis on the emission reduction potentials of different types of NBS; 3) estimating total carbon emission reduction potentials from the spatial deployment and coupling of multiple NBS with parametric simulation. Our results indicate that vast areas of urbanized and un-urbanized land in EU cities can host NBS to further mitigate carbon emissions. The reduction potential is large and can contribute a critical wedge towards carbon neutrality.
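The first step, downscaling coarse emission grids to fine land-use cells, can be illustrated with a simple mass-conserving (dasymetric) scheme like the sketch below; the weighting proxy and grid factor are assumptions for illustration, not the authors' exact procedure.

```python
# Hedged sketch: dasymetric downscaling of coarse gridded emissions onto fine
# land-use cells, weighted by a socioeconomic proxy so cell totals sum back
# to the coarse-grid value. Array shapes and the proxy are assumptions.
import numpy as np

def downscale_emissions(coarse_emis, fine_weight, factor):
    """
    coarse_emis : (H, W) emissions on the coarse grid (e.g. 10x10 km)
    fine_weight : (H*factor, W*factor) weighting layer (e.g. population
                  or built-up area) on the fine grid (e.g. 25x25 m)
    factor      : number of fine cells per coarse cell along each axis
    Returns emissions on the fine grid, conserving coarse-cell totals.
    """
    H, W = coarse_emis.shape
    fine = np.zeros_like(fine_weight, dtype=float)
    for i in range(H):
        for j in range(W):
            block = fine_weight[i*factor:(i+1)*factor, j*factor:(j+1)*factor]
            total = block.sum()
            if total > 0:
                fine[i*factor:(i+1)*factor, j*factor:(j+1)*factor] = (
                    coarse_emis[i, j] * block / total)
            else:
                # Spread uniformly where the proxy is zero everywhere.
                fine[i*factor:(i+1)*factor, j*factor:(j+1)*factor] = (
                    coarse_emis[i, j] / factor**2)
    return fine
```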

How to cite: Pan, H., Page, J., Cong, C., and Kalantari, Z.: Spatial deployment of Nature-based Solutions to support carbon neutrality for 50 EU cities., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8613, https://doi.org/10.5194/egusphere-egu22-8613, 2022.

EGU22-9109 | Presentations | ITS4.4/ERE1.10 | Highlight

Ecosystems for disaster risk reduction: what is the scientific evidence? 

Dr. Karen Sudmeier-Rieux and the Co-authors

Calls are rising for ecosystems, or green infrastructure, to complement engineered infrastructure for more effective disaster risk reduction and climate governance. Key international framework agreements, including the Sendai Framework for Disaster Risk Reduction 2015-2030 and the 2021 Glasgow Pact, noted the importance of ensuring the integrity of all ecosystems in addressing climate change and disaster risk. For example, vegetation can stabilize slopes to reduce mountain hazards, while sand dunes, mangroves, and/or seagrasses can reduce the impacts of coastal storms. However, there are gaps in the scientific evidence on this topic, with few comprehensive, peer-reviewed studies to support decision-making on green infrastructure for disaster risk reduction.

This study systematically reviews 529 English-language articles published between 2000 and 2019. The objective was to catalogue the extent of knowledge and confidence in the role of ecosystems in reducing disaster risk. The main question this review addresses is: what is the evidence for the role that ecosystem services and/or functions play in disaster risk reduction? We modified the review methodology established by the Intergovernmental Panel on Climate Change to identify the robustness of evidence and level of agreement on the role of ecosystems in attenuating the most common types of hazards.

The data demonstrate very robust links on the role of ecosystems in forest fire management, urban flooding and slope stabilization to reduce mountain hazards in a cost-effective manner. The study also highlights how ecosystems provide multiple services and functions in addition to regulating hazards, e.g., provisioning services for reducing vulnerability. The review highlights several research gaps, notably a geographic concentration of studies on urban areas of Europe and North America, and insufficient policy-relevant research on coastal, dryland, and watershed areas, especially in Asia, Africa and Latin America. To conclude, more attention should be paid to filling these research gaps and developing performance standards, which would provide policy-makers with increased confidence in investing in green infrastructure for disaster risk reduction and climate governance.

How to cite: Sudmeier-Rieux, Dr. K. and the Co-authors: Ecosystems for disaster risk reduction: what is the scientific evidence?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9109, https://doi.org/10.5194/egusphere-egu22-9109, 2022.

EGU22-10433 | Presentations | ITS4.4/ERE1.10 | Highlight

Suitability of soil carbon certificates for climate change mitigation 

Carsten Paul, Axel Don, Bartosz Bartkowski, Martin Wiesmeier, Sebastian Weigl, Steffi Mayer, Markus Steffens, André Wolf, Cenk Dönmez, and Katharina Helming

There is growing awareness of the role that agricultural soils can play in climate change mitigation. Agricultural management that increases soil organic carbon (SOC) stocks constitutes a nature-based solution for carbon dioxide removal. As soils store about twice the amount of carbon found in the atmosphere, even small relative increases could significantly reduce global warming.

However, increasing SOC requires management changes that come with costs to the farmers. In this regard, soil carbon certificates could provide a much-needed financial incentive: Farmers register their fields with commercial providers who certify any SOC increase achieved during a set period of time. The certificates are then sold on the voluntary carbon-offset market. We analysed the suitability of soil carbon certificates for climate change mitigation from the perspectives of soil sciences, agricultural management, and governance. In particular, we addressed questions of quantification, additionality, permanence, changes in emissions, leakage effects, transparency, legitimacy and accountability, as well as synergies and trade-offs with other societal targets.

Soil properties and the mechanisms by which carbon is stored in soils have strong implications for the assessment. Soils have a limited storage capacity, and SOC is not sequestered permanently; rather, SOC stocks are the dynamic result of plant-derived inputs and losses, mainly in the form of microbial respiration. The higher the SOC stock, the higher the annual carbon input needed to maintain it. If carbon-friendly management is discontinued, elevated SOC levels will therefore revert to their original level.

We found that while changes in agricultural management that increase SOC are highly desirable and offer multiple co-benefits for climate change adaptation, soil carbon certificates are unsuitable as a tool. They are unlikely to deliver the climate change mitigation they promise, as certificate providers cannot guarantee permanence and additionality of SOC storage over climate-relevant time frames. Where the certified carbon storage is non-permanent or fails to meet criteria of additionality, the use of such certificates to advertise products as "carbon-neutral" may be construed as false advertising.

How to cite: Paul, C., Don, A., Bartkowski, B., Wiesmeier, M., Weigl, S., Mayer, S., Steffens, M., Wolf, A., Dönmez, C., and Helming, K.: Suitability of soil carbon certificates for climate change mitigation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10433, https://doi.org/10.5194/egusphere-egu22-10433, 2022.

Globally, there is clear evidence that unsustainable urbanisation and climate change are pressing challenges for our systems. Nature-based Solutions are starting to be considered as a mechanism to help tackle societal and global challenges such as biodiversity loss, ecosystem depletion, resource use, and human and ecological well-being. Nevertheless, disciplines are still working separately, and enabling the co-design of Nature-based Solutions for sustainable urban planning is far from being considered for climate adaptation and climate neutrality in cities. This study intends to address these research gaps. To tackle these issues, the paper frames the need to align science, policy and society goals to reach a sustainable future and to bring sectors together to help build an inclusive, healthy and resilient world. The methodology is based on a systematic review process in which we explore the state of the art on the matter. The paper intends to open the discussion on a holistic, systemic and comprehensive approach to mainstreaming Nature-based Solutions, and presents a novel pathway for transdisciplinary climate and environmental planning action: a socio-ecological and environmental-economic framework for a Nature-based Solutions action plan, with defined key principles to enable the mainstreaming of Nature-based Solutions into policies and governance. The study recommends specific Nature-based Solutions strategies to address the lack of coherence that sometimes appears in approaches to designing and planning cities, implementing policies for sustainable urban planning and design, and facilitating ecosystem restoration and human well-being. The aim is to reach an environmentally, socially, economically, ecologically and politically sustainable, circular and resilient Europe by 2030 and to help deliver the global policy agendas and the European Green Deal and its strategies.

How to cite: Garcia Mateo, M. C. and Tillie, N.: Enabling the mainstreaming of nature-based solutions into policy-making and governance: Holistic and systemic approach and coherence across policies to build a sustainable, circular and resilient planet and tackle societal challenges, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10493, https://doi.org/10.5194/egusphere-egu22-10493, 2022.

EGU22-10608 | Presentations | ITS4.4/ERE1.10

Earth Climate Optimisation Productivity Island Array (ECOPIATM) 

John Allen, Calum Fitzgerald, and Lonnie Franks

A new nature based solution for capturing the entire man-made emission of carbon dioxide per year and locking it away in the deep ocean, called ECOPIATM, has been devised by a marine think tank, MyOcean Resources Ltd. This is a global solution to the anthropogenic climate change problem, without environmental downsides - it provides the fix. By using the characteristics of the Ocean, ECOPIATM removes the excess atmospheric CO2, de-acidifies the ocean’s waters, creates new sustainable fisheries, and most importantly allows the economies of the world to continue to grow and prosper.

ECOPIATM is able to address the anthropogenic climate change problem whilst having a positive global impact on economic growth. It enables continued economic growth for all nations by balancing the problem of excess atmospheric CO2 rather than following strategies that require a reduction in economic activities. Trying to reduce the amount of excess CO2 emitted by economies can be considered the biggest waste management issue the world has to solve; however current strategies have had trouble getting traction due to their negative impact on economic growth. 

By transillumination of the giant deserts of the Ocean, we can reduce the amount of atmospheric CO2 at the same time as de-acidifying the oceans, by empowering natural oceanic primary productivity simply through the provision of light. This allows ECOPIATM to be an effective CO2 waste management solution for the atmosphere. Rather than having to harm economic growth through difficult to achieve emissions reductions, companies can work with ECOPIATM to genuinely offset their atmospheric CO2 emissions, through photosynthetic CO2 uptake.

These enormous deserts of the sub-tropical open oceans, one seventh (~50 million km2) of the Earth's total ocean area, are reportedly getting bigger, with productive surface waters being replaced by the minimally productive surface waters of the oligotrophic gyres at a rate of 0.8 million km2 per year. ECOPIATM in total requires only 0.2 million km2 of those gyres, just one quarter of the current increase in area per year.

Many of the nature-based solutions have significant uncertainties that largely come about from the farming-like practice of changing the composition of the 'soil', or in this case the ocean waters. ECOPIATM takes a different approach, that of channelling light down to depths where there are plenty of naturally determined nutrients and a seed population; thus we are no longer 'farming', we are simply providing light. Furthermore, as there is no strict geo-engineering involved, ECOPIATM provides no mechanism for a preferential pressure on the naturally determined diversity of the light-cultured ecosystem.

It has been noted by the UK's HRH the Prince of Wales, amongst others, that the global anthropogenic climate change issue can only be solved by Industry. ECOPIATM stands out in that it is self-fundable, both in infrastructure and operational costs, via the use of Carbon Credits at today's prices, allowing Industry to solve the issue in an affordable way.

How to cite: Allen, J., Fitzgerald, C., and Franks, L.: Earth Climate Optimisation Productivity Island Array (ECOPIATM), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10608, https://doi.org/10.5194/egusphere-egu22-10608, 2022.

EGU22-11878 | Presentations | ITS4.4/ERE1.10

N2O-emission risk assessment tool for nitrogenous fertilizer applications 

Henrik Vestergaard Poulsen, Sander Bruun, Cecilie Skov Nielsen, and Søren Kolind Hvid

Nitrous oxide (N2O) emitted from agricultural soils makes up a significant part of total agricultural greenhouse gas (GHG) emissions. These emissions are to a large extent caused, directly or indirectly, by the application of nitrogenous fertilizer, and there is a strong demand for mitigation strategies.

 

Nitrous oxide is produced in the soil by a range of different processes, but mainly by microbial nitrification and denitrification. A number of factors influence these microbial processes in the soil, most notably the oxygen concentration, the availability of ammonium and nitrate, available organic matter, and diffusivity, and fairly advanced process-based simulation models are often used in attempts to simulate the amount of N2O emitted. Here we propose using a more simplistic modelling approach to provide a novel risk assessment tool for nitrogenous fertilizer applications, to be implemented in Danish farmers' field management programmes.

 

At SEGES Innovation we have unique database access to field activity data from Danish farmers - e.g. crop sequence, fertilizer applications, residue handling, soil texture - covering more than 85 % of the Danish cultivated area. Based on these data and field-specific climate data, a soil water balance model (Plauborg et al. 1995) and a soil organic carbon model (Taghizadeh-Toosi et al. 2014) are run in daily timesteps for all fields in the database. These models provide, respectively, the daily level of water-filled pore space (WFPS) in the soil and the organic matter turnover rate in the soil, simulated over the 10-day weather forecast period. These two outputs are combined with a simulated soil temperature in a simplified version of the NGAS model (Parton et al. 1996) to give a rough simulated N2O emission for any planned fertilizer application throughout the weather forecast period.
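To make the combination step concrete, the sketch below shows one way daily WFPS, soil temperature, and organic-matter turnover might be folded into a relative N2O-emission risk index over a 10-day forecast window; the functional forms are illustrative placeholders and are not the NGAS-model equations or the SEGES tool.

```python
# Illustrative sketch only: relative N2O-emission risk index from daily WFPS,
# soil temperature and organic-matter turnover. Functional forms are
# placeholders, not the NGAS equations used in the actual tool.
import numpy as np

def n2o_risk_index(wfps, soil_temp_c, c_turnover):
    """All inputs are arrays of daily values over the forecast period.
    wfps        : water-filled pore space, 0-1
    soil_temp_c : soil temperature, deg C
    c_turnover  : simulated organic-matter turnover rate (relative units)
    Returns a dimensionless daily risk index between 0 (low) and 1 (high)."""
    # Denitrification-driven N2O tends to peak at high but unsaturated WFPS.
    f_water = (np.clip((wfps - 0.4) / 0.35, 0.0, 1.0)
               * np.clip((1.0 - wfps) / 0.1, 0.0, 1.0))
    # Simple Q10-style temperature response, capped at 1.
    f_temp = np.clip(2.0 ** ((soil_temp_c - 20.0) / 10.0), 0.0, 1.0)
    # Substrate availability scales with organic-matter turnover.
    f_substrate = c_turnover / (c_turnover.max() + 1e-9)
    return f_water * f_temp * f_substrate

# Example: flag the riskiest day in a 10-day forecast for a planned application.
wfps = np.array([0.55, 0.6, 0.72, 0.8, 0.78, 0.7, 0.65, 0.6, 0.58, 0.55])
temp = np.array([12, 13, 14, 15, 16, 15, 14, 13, 12, 12], dtype=float)
turn = np.array([1.0, 1.1, 1.3, 1.5, 1.4, 1.2, 1.1, 1.0, 0.9, 0.9])
risk = n2o_risk_index(wfps, temp, turn)
print("highest-risk day:", int(risk.argmax()) + 1)
```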

 

The risk assessment tool presents this daily simulated N2O emission to the farmer as a risk evaluation of fertilizer application within the field management programmes, where future field activities are entered and logged. The objective is to lower GHG emissions by reducing the number of fertilizer applications made at peak N2O-emission conditions, once farmers are presented with this information.

How to cite: Vestergaard Poulsen, H., Bruun, S., Skov Nielsen, C., and Kolind Hvid, S.: N2O-emission risk assessment tool for nitrogenous fertilizer applications, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11878, https://doi.org/10.5194/egusphere-egu22-11878, 2022.

The deployment of carbon dioxide removal (CDR) processes, as well as strong and immediate emission reductions, is required to limit the global temperature increase to "well below 2°C above pre-industrial levels", as required by Article 2 of the Paris Agreement.

Among CDR processes, ocean alkalinity enhancement (OAE) makes it possible to remove CO2 from the atmosphere and simultaneously counteract the ongoing ocean acidification caused by the increased atmospheric CO2 concentration. In the framework of the DESARC (DEcreasing Seawater Acidification Removing Carbon) MARESANUS research project, different strategies to produce decarbonized slaked lime (SL), i.e. Ca(OH)2, and to discharge it into seawater have been evaluated.

The feasibility and potential of OAE were evaluated at the global scale and at the scale of the Mediterranean Sea basin. Two different logistic scenarios for the discharge of SL were analyzed: new dedicated ships, and partial loads on modified existing dry bulk and container ships. Data on the existing global fleet of vessels and marine routes were processed to assess the potential discharge of SL.

Using the life-cycle assessment methodology, the efficiency of removing CO2 from the atmosphere was evaluated, as well as other potential environmental impacts connected to SL production and transport. The "cradle-to-grave" approach has been applied to different configurations of the process, which consider both biomass gasification and the use of renewables as an energy source for limestone calcination, as well as possible CO2/H2 separation and CO2 storage.

The data collected for the life cycle inventory were mainly obtained from the preliminary design of the process and the scientific literature, as well as from the ecoinvent database. Following the environmental footprint method implemented in the SimaPro software, sixteen impact categories assessing the burdens on the environment and on human health are evaluated, with a particular focus on climate change, land use, and mineral and metal use.

The results show that, for all the analyzed configurations, the process has a potential negative impact on the climate change category, i.e. there is a benefit for the environment in terms of CO2 removal from the atmosphere. Since the avoided impacts are related to the source of hydrogen, the type of avoided source plays a relevant role and is subject to a sensitivity analysis.

Finally, the availability of limestone for the large-scale development of ocean alkalinisation has been evaluated, considering in particular deposits of pure limestone near coastlines, which could minimize logistics and transportation activities.

Results show that pure carbonate potential resources amount to several trillion tons and are not a constraint for the development of global-scale ocean liming. A large part of pure limestone resources lies near the coastline, in areas with no or low vegetation cover, mainly in North Africa and Iran. Global yearly limestone production is similar to that of coal, and the required upscaling compared to the current extraction rate is far lower for limestone than for other materials considered for OAE, such as olivine, magnesite and brucite.

How to cite: Campo, F. P., Caserini, S., and Grosso, M.: Feasibility, potential and environmental impacts of ocean alkalinity enhancement for removing CO2 from the atmosphere and counteracting seawater acidification, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12467, https://doi.org/10.5194/egusphere-egu22-12467, 2022.

EGU22-12839 | Presentations | ITS4.4/ERE1.10 | Highlight

"The Arctic - the first step towards the terraformation of Mars. Experiences from northern Europe." 

Adrianna Rusek and Miłosz Huber

The Arctic is an area with unique climatic qualities on Earth. Located beyond the Arctic Circle, the region is characterized by phenomena such as polar day (in summer) and polar night (in winter), which affect the well-being of people living there. The frequent aurorae are manifestations of magnetic storms whose health effects are most pronounced in this region. Extreme temperatures can be recorded in these zones, especially in winter. At the same time, the environment there shows great sensitivity to changing climatic conditions and human activities: a small increase in temperature can melt permafrost and methane clathrates, and climate change is already affecting the plant and animal ecosystems. The Arctic also holds important deposits of energy resources, non-ferrous metals and other raw materials, and hosts trade routes connecting the continents (the so-called "Northern Road"). Growing interest in the Arctic contributes to its urbanization. This process is also important in a broader context: many of the technologies that prove themselves in these harsh conditions will also be applicable in other climate zones. The Arctic is becoming a testing ground for human missions in harsh conditions, for ways to survive in an unfavorable climate, and for pro-environmental technologies. An important advantage of the Arctic is also its great similarity to the climatic conditions of the warmest zones on Mars. Compared to Mars, however, planning engineering projects in the Arctic has many advantages. The presence of air at normal pressure, while not precluding the construction of airtight capsules, allows for easier evacuation of personnel in the event of a failure of life support systems. People working at various Arctic stations can likewise be assessed for their vulnerability to long periods spent in small, confined spaces. Nevertheless, there are also numerous localities in the Arctic where people lead relatively normal lives, the best example being northern Scandinavia, which is currently the most urbanized area beyond the Arctic Circle. Their experience of living in the extreme conditions of the north - the problems of urban development and transportation, environmental protection and many other areas of life in this zone - can be an important source of information for other inhabitants of Earth and Mars. Issues related to environmental protection and the fight against pollution in this climate zone will be just as relevant in other zones, where there are many more opportunities to use, for example, renewable energy sources. In the long run, building stable urbanized human settlements in the Arctic will become a model for human activity on Mars and (perhaps) in other regions of the Solar System. The authors present numerical data and possible scenarios of sustainable urbanization in the Arctic based on selected examples of Scandinavian experience, and analyze which of them have a universal character and could also be applied in other climatic conditions.

How to cite: Rusek, A. and Huber, M.: "The Arctic - the first step towards the terraformation of Mars. Experiences from northern Europe.", EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12839, https://doi.org/10.5194/egusphere-egu22-12839, 2022.

EGU22-13010 | Presentations | ITS4.4/ERE1.10

Scientific evidence of the economic benefits of ecosystem-based disaster risk reduction and ecosystem-based climate change adaptation 

Marta Vicarelli, Michael Kang, Madeline Leue, Aryen Shrestha, David Wasielewski, Karen Sudmeier-Rieux, Jaroslav Mysiak, Simon Schütze, Michael Marr, Shannon McAndrew, and Miranda Vance

Ecosystems and ecosystem services are key to achieving disaster risk reduction, sustainable development, and climate change adaptation, as is now recognized by major international framework agreements (Convention on Biological Diversity, 2014; United Nations Office for Disaster Risk Reduction, Sendai Framework for Disaster Risk Reduction, 2015-2030). However, there is limited knowledge about the cost efficiency and socio-economic equity outcomes of Nature-based Solutions (NbS) compared to traditional engineered strategies.

In this study we developed a global database of more than 130 peer-reviewed studies, published between 2000 and 2020, that perform economic evaluations of NbS for Ecosystem-based Climate Adaptation (EbA) and Ecosystem-based Disaster Risk Reduction (Eco-DRR). Using meta-analysis techniques, we assess the existing scientific knowledge on the economic viability and performance of NbS for Eco-DRR and EbA, cataloguing outcomes both in terms of degree of economic efficiency and social equity. Our analysis includes multiple dimensions: geographic distribution of the published studies, types of ecosystems and ecosystem services evaluated, hazards and climate impacts analyzed, and economic methodologies used to perform economic efficiency evaluations (e.g., cost benefit analysis, stated/revealed preferences evaluation methods).

This study builds on a recent global assessment (Sudmeier-Rieux et al., 2021) that performs the first systematic review of Eco-DRR peer-reviewed studies across all disciplines. Their results show robustness of evidence and level of agreement on the role of ecosystems in attenuating 30 types of hazards, based on the assessment methodology established by the Intergovernmental Panel on Climate Change (IPCC). Our meta-analysis expands the 2021 review by evaluating the economic benefits associated with Eco-DRR and EbA approaches; by examining the cost efficiency of Eco-DRR and EbA interventions compared to traditional engineering solutions; by performing equity assessments of the outcomes; and by studying how the NbS interventions reviewed contributed to the Sustainable Development Goals (SDGs).

REFERENCE:

Sudmeier-Rieux, K., Arce-Mojica, T., Boehmer, H.J., Doswald, N., Emerton, L., Friess, D.A., Galvin, S., Hagenlocher, M., James, H., Laban, P., Lacambra, C., Lange, W., McAdoo, B.G., Moos, C., Mysiak, J., Narvaez, L., Nehren, U., Peduzzi, P., Renaud, F.G., Sandholz, S., Schreyers, L., Sebesvari, Z., Tom, T., Triyanti, A., van Eijk, P., van Staveren, M., Vicarelli, M., Walz, Y. "Scientific evidence for ecosystem-based disaster risk reduction." Nature Sustainability (2021): 1-8.

How to cite: Vicarelli, M., Kang, M., Leue, M., Shrestha, A., Wasielewski, D., Sudmeier-Rieux, K., Mysiak, J., Schütze, S., Marr, M., McAndrew, S., and Vance, M.: Scientific evidence of the economic benefits of ecosystem-based disaster risk reduction and ecosystem-based climate change adaptation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13010, https://doi.org/10.5194/egusphere-egu22-13010, 2022.

ITS5 – Impact of land use on food production and "natural" hazards

EGU22-451 | Presentations | ITS5.1/BG8.5

The influence of anthropogenic perturbations on the accumulation of polycyclic aromatic hydrocarbons in a lake system of Central Himalayas 

Vishal Kataria, Ankit Yadav, Al Jasil Chirakkal, Praveen Kumar Mishra, Sanjeev Kumar, and Anoop Ambili

Delineating the impact of various natural and anthropogenic drivers on the environment is a paramount challenge in paleoenvironmental reconstruction. In the present study, we used a faecal biomarker (coprostanol) and polycyclic aromatic hydrocarbons (PAHs) in lake sediments, alongside population census and meteorological parameters from the Central Himalayas, to delineate the anthropogenic and natural signals of environmental changes for the past ~70 years (1950-2018 AD). The resulting stress from human activities is evident in an abrupt increase in coprostanol (0.1-5.5 mg/g) and pyrolytic PAH concentrations (1422-32077 ng/g) in the sediments. Further, with population growth and economic and infrastructural development, the composition of PAHs in the sediments has changed: the proportion of heavy-molecular-weight PAHs increased from 57% to 86%, whereas low-molecular-weight PAHs decreased from 43% to 14%, indicating an increase in the proportion of fossil fuel combustion and a decrease in biomass burning sources. Based on reanalysis datasets, the computed temporal variation of annual precipitation and annual temperature over the region clearly indicated that natural drivers have no direct influence on the PAH concentrations and other biogeochemical parameters. In addition, the HYSPLIT back-trajectory analysis provided evidence of the atmospheric deposition of black carbon from countryside biomass burning and petrogenic pollution from nearby megacities.

How to cite: Kataria, V., Yadav, A., Chirakkal, A. J., Mishra, P. K., Kumar, S., and Ambili, A.: The influence of anthropogenic perturbations on the accumulation of polycyclic aromatic hydrocarbons in a lake system of Central Himalayas, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-451, https://doi.org/10.5194/egusphere-egu22-451, 2022.

EGU22-4847 | Presentations | ITS5.1/BG8.5

Active fires during the COVID-19 lockdown period in the Llanos ecoregion, northern South America 

Santiago Valencia, Diver E. Marín, Juliana Mejía-Sepúlveda, Jerónimo Vargas, Natalia Hoyos, Juan F. Salazar, and Juan Camilo Villegas

Tropical savannas are the biome with the highest fire occurrence worldwide and play a key role in fire carbon emission dynamics at regional to global scales. During the past decades, however, climate change and land use management have altered their fire regimes via fire suppression or ignition related to conservation and agricultural practices, and extreme weather conditions, among others. In particular, the ongoing COVID-19 pandemic has modified human activities in both urban and rural environments, and thus provides an opportunity to study the interactions between socio-economic and biophysical drivers of fires. Using satellite-based observations, we analyze the spatio-temporal patterns of active fires (AF, from MODIS-MCD14ML) in the Llanos ecoregion (northern South America, between Colombia and Venezuela) during the COVID-19 lockdown period (mid-March to December 2020). We also examine fire carbon emissions (from GFED4s) as well as monthly precipitation (from CHIRPS), maximum temperature, and vapor pressure deficit (VPD, from TerraClimate). Our results show that 2020 had the highest number of AF (>60% above the 2001-2019 average) and the highest fire carbon emissions (>50% above the average). We found that these increases occur mainly during the peak of the fire season (March and April), which corresponds to the beginning of the lockdown period in Venezuela (March 17) and Colombia (March 20). Pixels (at 0.05° resolution) with significant positive AF anomalies (p<0.05) occur primarily in Venezuela and over grassland and agricultural land covers. A large proportion of these pixels coincide with significant positive anomalies (p<0.05) in VPD (>70% of pixels) and maximum temperature (>50%) in March and April. Furthermore, our results highlight that the increase in AF could be associated not only with potential changes in land use management but also with anomalous weather patterns during the lockdown period in the Llanos ecoregion.

How to cite: Valencia, S., Marín, D. E., Mejía-Sepúlveda, J., Vargas, J., Hoyos, N., Salazar, J. F., and Villegas, J. C.: Active fires during the COVID-19 lockdown period in the Llanos ecoregion, northern South America, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4847, https://doi.org/10.5194/egusphere-egu22-4847, 2022.

EGU22-5221 | Presentations | ITS5.1/BG8.5

Climate change impact on wildfires in the Canary Islands 

Judit Carrillo, Juan Carlos Pérez, Francisco Javier Expósito, Juan Pedro Díaz, and Albano González

The frequency and intensity of wildfires will be aggravated by climate change. Small islands are more vulnerable to these events due to their greater number of endemic species, limited territory, and the isolation of their firefighting systems, among other factors. Climate projections of the Fire Weather Index (FWI) were produced with the Weather Research and Forecasting (WRF) model at a spatial resolution of 3x3 km until the end of the century, using results provided by the CMIP5 initiative as boundary conditions, under two Representative Concentration Pathways (RCPs), 4.5 and 8.5. The length of the fire season is expected to increase by up to 74 days per year, and the area at high risk could increase by 43%. In addition, the FWI is projected to increase with altitude, mainly due to increasing temperature and decreasing precipitation, which are more pronounced at higher elevations.
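A derived metric such as fire-season length can be obtained from daily FWI fields by counting days above a danger threshold, as in the minimal sketch below; the threshold value and array layout are assumptions, not those used in the study.

```python
# Minimal sketch: days per year above a high fire-danger FWI threshold
# (threshold and array shapes are illustrative assumptions).
import numpy as np

def fire_season_length(fwi_daily, threshold=30.0):
    """fwi_daily: (n_days, ny, nx) daily FWI for one year.
    Returns an (ny, nx) map of the number of days above the threshold."""
    return (fwi_daily > threshold).sum(axis=0)

# Example on random data standing in for one year of 3x3 km model output.
fwi = np.random.default_rng(0).gamma(shape=2.0, scale=10.0, size=(365, 120, 150))
season_length = fire_season_length(fwi)
```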

How to cite: Carrillo, J., Pérez, J. C., Expósito, F. J., Díaz, J. P., and González, A.: Climate change impact on wildfires in the Canary Islands, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5221, https://doi.org/10.5194/egusphere-egu22-5221, 2022.

Previous studies have suggested that the behaviour of policymakers can be influenced either by personal gain or by the desire to please the electorate. However, politicians' role and incentives in the determination of fire regimes have been largely ignored in research advocating the adoption of effective fire adaptation and prevention strategies. In this context, understanding the drivers of wildfires is pivotal for developing and promoting effective fire prevention strategies. This empirical analysis investigates whether there is a significant change in wildfire occurrence around gubernatorial election years and whether the change is consistent with the incumbent candidate running for re-election. To assess the impact of electoral cycles on wildfire occurrence, I estimate a Quasi-Maximum Likelihood (QML) Poisson fixed-effects model.
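A minimal sketch of such an estimation, assuming a hypothetical state-by-year panel with columns like fire_count, election_year and incumbent_running (these names are invented for illustration and are not the author's data), could look as follows: Poisson point estimates with unit and year dummies and robust standard errors, giving the quasi-maximum-likelihood interpretation.

```python
# Hedged sketch (not the author's code): Poisson fixed-effects regression of
# wildfire counts on electoral-cycle indicators, with robust (QML) errors.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("wildfire_panel.csv")  # hypothetical state x year panel

model = smf.glm(
    "fire_count ~ election_year + incumbent_running + C(state) + C(year)",
    data=df,
    family=sm.families.Poisson(),
)
result = model.fit(cov_type="HC1")      # robust covariance for QML inference
print(result.summary())
```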

How to cite: Piroli, E.: Do politicians’ reelection incentives affect wildfires occurrence?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5927, https://doi.org/10.5194/egusphere-egu22-5927, 2022.

EGU22-6811 | Presentations | ITS5.1/BG8.5

Mapping socio-ecological vulnerability of tropical peat landscape fires 

Janice Ser Huay Lee, Yuti Ariani Fatimah, Stuart William Smith, Nur Estya Rahman, Laely Nurhidayah, Budi Wardhana, Asmadi Saad, Zaenuddin Prasojo, Feroz Khan, Maple Sifeng Wu, Xingli Giam, Kwek Yan Chong, Laura Graham, and David Lallemant

Fire represents a mainstay for rural communities managing tropical landscapes. However, the increase in uncontrolled fires in tropical landscapes because of land use and climate change poses a major threat to livelihoods, public health, and ecosystems. Peatlands in Southeast Asia are one such example of tropical landscapes that experience high flammability due to clearance of forests and excessive drainage for agriculture and forestry. The degradation of tropical peatland ecosystems increases their susceptibility to landscape fires, which in turn increases the vulnerability of people and peatland conditions to future fires. To identify locations of tropical peatlands and surrounding communities that are vulnerable to fires, we conducted a socio-ecological vulnerability assessment and mapped the socio-ecological vulnerability of tropical peatlands to fires. We used an inductive approach to conceptualize and operationalize vulnerability and its associated dimensions of exposure, sensitivity, and adaptive capacity through empirical case studies in the literature, with a focus on tropical peatlands and fires in Indonesia. We present preliminary results of our mapped social and ecological vulnerability of Indonesia’s tropical peatlands to peat landscape fires. This would allow policymakers to identify places that display both high ecological and social vulnerability to fires and to channel aid and mitigation efforts where they are most urgently needed.
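
As a generic illustration of how exposure, sensitivity, and adaptive capacity can be combined into a mapped vulnerability score, the sketch below min-max normalises each dimension and averages them; the additive form and equal weights are assumptions for illustration, not the scheme operationalised in this study.

```python
import numpy as np

def vulnerability_index(exposure, sensitivity, adaptive_capacity):
    """Each input is a 2-D array (one value per grid cell). Higher adaptive
    capacity lowers vulnerability, hence the (1 - AC) term. Equal weighting
    of the three dimensions is an assumption made for this sketch."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - np.nanmin(x)) / (np.nanmax(x) - np.nanmin(x))
    return (norm(exposure) + norm(sensitivity) + (1.0 - norm(adaptive_capacity))) / 3.0
```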

How to cite: Lee, J. S. H., Fatimah, Y. A., Smith, S. W., Rahman, N. E., Nurhidayah, L., Wardhana, B., Saad, A., Prasojo, Z., Khan, F., Wu, M. S., Giam, X., Chong, K. Y., Graham, L., and Lallemant, D.: Mapping socio-ecological vulnerability of tropical peat landscape fires, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6811, https://doi.org/10.5194/egusphere-egu22-6811, 2022.

Fire plays an important role in the Earth system. While some aspects of fire, including burnt area and fire frequency, have been extensively studied, fire carbon emissions, which could exert significant influence on the carbon cycle and on a wide range of geophysical processes relating to ecosystem services and human well-being, remain relatively understudied in terms of their global trends and drivers. We investigated fire emission trends from 2001 to 2019 at global and regional scales using total carbon emission data from the fourth-generation Global Fire Emission Database (GFED4s). We identified geophysical and anthropogenic drivers of fire emission trends for regions defined by geographical region and biome with a causal model, and quantified driver importance with machine learning models by estimating each driver's impact on fire emissions. We observed an insignificant global fire emission trend, mainly caused by opposing fire emission trends in tropical savanna/grasslands and boreal forests, the two biomes that were the largest sources of global fire emissions. Tropical savanna/grasslands contributed 60% of global fire emissions and showed a decreasing fire emission trend at a rate of -9.7±1.4 × 10^12 gC/year; boreal forests contributed around 8% and increased at a rate of 7.4±2.2 × 10^12 gC/year (rates estimated by Huber robust regression). At the regional scale, we found that fire emission trends were driven by geophysical factors in all regions. Anthropogenic interventions caused changes in fire emissions only in limited regions, including all biomes in Africa and some biomes in Boreal Asia, Central Asia, and North America. Decreasing fire emission trends in tropical savanna/grasslands occurred mainly in Africa, where the dominant drivers were anthropogenic interventions, namely agricultural expansion and the subsequent declines in vegetation. Increasing fire emissions from boreal forests largely came from Boreal Asia, where anthropogenic interventions were also important drivers and climatic drivers relating to moisture, drought, and temperature played a vital role as well, especially moisture. Vegetation indices were also identified as drivers for this region but were the least important ones. Our results suggest that the future fire emission trend for boreal forests in Boreal Asia could be highly vulnerable to climate change: fire emissions in this region may continue to increase if the climate becomes drier, since drivers relating to moisture were highly important. On the other hand, a further decrease in fire emissions from African savanna/grasslands is limited by the already reduced vegetation. Therefore, at the global scale, the risk of increasing fire carbon emissions is rather high. Increasing carbon emissions and the slow recovery of carbon sink capacity in burnt forests imply a long-term net carbon source from boreal forests, which could be challenging for climate mitigation.
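
The trend rates above are attributed to Huber robust regression. A minimal sketch of that estimator applied to an annual emissions series, using statsmodels’ robust linear model with the Huber norm, is given below; the input arrays are placeholders, not GFED4s values.

```python
import numpy as np
import statsmodels.api as sm

def huber_trend(years, emissions_gc):
    """Robust linear trend of annual fire carbon emissions (gC per year)
    using the Huber M-estimator; returns the slope and its standard error."""
    X = sm.add_constant(np.asarray(years, dtype=float))
    y = np.asarray(emissions_gc, dtype=float)
    fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
    return fit.params[1], fit.bse[1]

# Example call with placeholder data:
# slope, se = huber_trend(range(2001, 2020), annual_emissions)
```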

How to cite: Wu, S. and Lee, J. S. H.: Geophysical and anthropogenic drivers for global and regional fire emission trends from 2001 to 2019, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6879, https://doi.org/10.5194/egusphere-egu22-6879, 2022.

EGU22-7515 | Presentations | ITS5.1/BG8.5

Cross-Country Risk Quantification of Extreme Wildfires in Mediterranean Europe* 

Sarah Meier, Eric Strobl, Robert J.R. Elliott, and Nicholas Kettridge

We estimate the country-level risk of extreme wildfires, defined by burned area (BA), for Mediterranean Europe and carry out a cross-country comparison. To this end we use European Forest Fire Information System (EFFIS) geospatial data from 2006-2019 to perform an extreme value analysis. More specifically, we apply a point process characterization of wildfire extremes using maximum likelihood estimation. By modeling covariates, we also evaluate potential trends and correlations with commonly known factors that drive or affect wildfire occurrence, such as the Fire Weather Index as a proxy for meteorological conditions, population density, land cover type, and seasonality. We find that the highest risk of extreme wildfires is in Portugal (PT), followed by Greece (GR), Spain (ES), and Italy (IT), with 10-year BA return levels of 50,338 ha, 33,242 ha, 25,165 ha, and 8,966 ha, respectively. Coupling our results with existing estimates of the monetary impact of large wildfires suggests expected losses of 162-230 million € (PT), 81-96 million € (ES), 41-126 million € (GR), and 18-34 million € (IT) for such 10-year return period events.
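
For readers who want to reproduce a return-level calculation of this kind, the sketch below fits a Generalized Pareto Distribution to burned-area excesses over a threshold and evaluates an m-year return level. This threshold-excess formulation is a simpler stand-in for the point-process characterization used in the study, and the threshold choice is an assumption.

```python
import numpy as np
from scipy.stats import genpareto

def ba_return_level(burned_areas_ha, threshold_ha, record_years, period_years=10):
    """Fit a GPD to fire sizes exceeding `threshold_ha` and return the
    burned-area level expected to be exceeded once every `period_years`."""
    sizes = np.asarray(burned_areas_ha, dtype=float)
    excess = sizes[sizes > threshold_ha] - threshold_ha
    shape, _, scale = genpareto.fit(excess, floc=0)
    rate = len(excess) / record_years          # expected exceedances per year
    m = period_years * rate                    # expected exceedances in the period
    if abs(shape) < 1e-6:                      # exponential-tail limit
        return threshold_ha + scale * np.log(m)
    return threshold_ha + scale / shape * (m ** shape - 1.0)
```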

How to cite: Meier, S., Strobl, E., Elliott, R. J. R., and Kettridge, N.: Cross-Country Risk Quantification of Extreme Wildfires in Mediterranean Europe*, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7515, https://doi.org/10.5194/egusphere-egu22-7515, 2022.

EGU22-7970 | Presentations | ITS5.1/BG8.5

Dynamics of fires, harvest and carbon stocks in U.S. forests 1926-2017 

Andreas Magerl, Simone Gingrich, Sarah Matej, Christian Lauk, Geoffrey Cunfer, Cody Yuskiw, Matthew Forrest, Stefan Schlaffer, and Karlheinz Erb

Human-fire interactions have always played an important role in the United States of America. Important processes include land clearing with fire during agricultural expansion and the development of the West in the 19th century, large-scale fire suppression in the first half of the 20th century, and recent “mega-fire” events in California. Strong regional divergences have occurred: fire regimes in the Eastern U.S. were significantly altered by settlement and land-use changes over the past 100 years, resulting in reduced severity of fire events, whereas in the West the areal extent and severity of wildfires have increased, especially in recent decades, arguably due to more frequent climatic extreme events. Although the historical fire narrative in the U.S. has been studied in numerous publications, the links between these developments and changes in the socio-metabolic system, i.e., changes in resource use and consumption, are, to our knowledge, less well understood.

In this study we investigate the influence of anthropogenic alteration of fire regimes on forest biomass carbon stocks, in comparison to forest uses, i.e., the extraction of woody biomass and forest grazing, on multiple spatial scales. We develop a long-term reconstruction of biomass burned in forests at the national, regional, and state level based on statistical and remote-sensing data. We describe and examine historical differences between fire regimes in the Eastern and Western United States in connection with human use of forests for the period 1940-2017. Using panel data analysis, we investigate the diverse connections between forest change, socio-metabolic processes, natural disturbances (i.e., wildfires), and associated human fire control on various spatial and temporal scales. With this study we aim to contribute to a better understanding of the underlying socio-metabolic drivers and accompanying processes of altered forest fire regimes.
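
As an illustration of the kind of panel specification mentioned above, the sketch below estimates a two-way (state and year) fixed-effects regression of forest carbon stocks on fire and harvest variables with statsmodels; the variable names and the OLS form are placeholders chosen for illustration, not the study’s model.

```python
import statsmodels.formula.api as smf

# df is assumed to be a state-by-year panel loaded elsewhere, with columns
# 'carbon_stock', 'area_burned', 'wood_harvest', 'state', and 'year'
# (illustrative names only).
fe = smf.ols(
    "carbon_stock ~ area_burned + wood_harvest + C(state) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})  # cluster by state
print(fe.summary())
```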

How to cite: Magerl, A., Gingrich, S., Matej, S., Lauk, C., Cunfer, G., Yuskiw, C., Forrest, M., Schlaffer, S., and Erb, K.: Dynamics of fires, harvest and carbon stocks in U.S. forests 1926-2017, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7970, https://doi.org/10.5194/egusphere-egu22-7970, 2022.

EGU22-9354 | Presentations | ITS5.1/BG8.5

Ranking the sensitivity of climate variables and FWI sub-indices to global wildfire burned area 

Manolis G. Grillakis, Apostolos Voulgarakis, Anastasios Rovithakis, Konstantinos Seiradakis, Aristeidis Koutroulis, Robert Field, Matthew Kasoar, Athanasios Papadopoulos, and Mihalis Lazaridis

Wildfires are integral parts of ecosystems, but at the same time they constitute a threat to man-made and natural environments. Variability in the area burned by wildfires has been largely attributed to weather and climate drivers; hence, fire danger indices such as the Canadian Fire Weather Index (FWI) use solely climate variables. The FWI uses four climate variables (precipitation, temperature, wind, and relative humidity) to estimate two sub-indices: the Initial Spread Index, which represents the initial fire spread danger, and the Buildup Index, which accounts for longer-term drought effects on fire danger; from these the FWI is finally assessed. Here, we establish correlations between the individual climate variables, the FWI and its sub-indices, and observed GFED monthly burned area, for each of the 14 GFED pyrographic regions at the global scale. The correlations are established on data aggregated by burned area size, to reduce the effect of smaller-scale climate influences as well as of socioeconomic factors such as fire suppression activities. The established correlations are then used to estimate the relative sensitivity of the area burned to each climate variable and FWI component. The analysis is repeated for different land use types of the burned area, i.e. forest areas, non-forest areas, and their combination. Our results indicate the relative importance of the four climate variables, as well as of the two FWI sub-indices, for each GFED region. The results highlight the significance of temperature and relative humidity to the variability of area burned in many regions globally. This work contributes to a better understanding of the climate drivers of global wildfire activity.
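
A minimal sketch of one way to rank drivers against burned area is shown below, using Spearman rank correlations on data aggregated per GFED region; the correlation measure and column names are assumptions for illustration, not necessarily those used in this work.

```python
import pandas as pd
from scipy.stats import spearmanr

def rank_drivers(region_df, drivers, target="burned_area"):
    """region_df: one row per aggregated burned-area bin for a single GFED
    region, with columns for the burned area and each candidate driver
    (e.g. precipitation, temperature, wind, relative humidity, ISI, BUI).
    Returns drivers ordered by absolute Spearman correlation with burned area."""
    rows = []
    for var in drivers:
        rho, p = spearmanr(region_df[var], region_df[target])
        rows.append({"driver": var, "spearman_rho": rho, "p_value": p})
    return (pd.DataFrame(rows)
              .sort_values("spearman_rho", key=abs, ascending=False)
              .reset_index(drop=True))
```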

 

This work is supported by CLIMPACT - National Research Network on Climate Change and its Impacts project, financed by the Public Investment Program of Greece and supervised by General Secretariat for Research and Technology (GSRT); and by the Leverhulme Centre for Wildfires, Environment, and Society through the Leverhulme Trust, grant number RC-2018-023.

How to cite: Grillakis, M. G., Voulgarakis, A., Rovithakis, A., Seiradakis, K., Koutroulis, A., Field, R., Kasoar, M., Papadopoulos, A., and Lazaridis, M.: Ranking the sensitivity of climate variables and FWI sub-indices to global wildfire burned area, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9354, https://doi.org/10.5194/egusphere-egu22-9354, 2022.

EGU22-13091 | Presentations | ITS5.1/BG8.5

Seasonality in the Anthropocene: On the construction of Southeast Asia’s 'haze season' in the media 

Felicia Liu, Vernon Yian, John Holden, and Thomas Smith

Widespread burning of tropical peatlands across regions of Malaysia and Indonesia is now considered to be an annual event in equatorial Southeast Asia. The fires cause poor air quality (‘haze’) across the region, affecting the health of millions, and leading to transboundary disputes between places that burn and the places downwind that suffer the smoke plumes from the burning. We seek to investigate the emerging social construction of a new season in the region – the ‘haze season’.

Seasons are a social construct that enables societies to organise their livelihoods around the expectation of recurring phenomena. They are not defined ‘objectively’ by observed patterns of relevant variables (e.g. satellite fire detections or air quality indices), but are instead the product of deliberation and contestation of which phenomena to observe, and how to normalise such phenomena to reflect and serve matters of concern to particular societies.

The emergence of a new season may imply the normalisation of the phenomena, which may carry both positive and negative implications for progress towards adapting to and/or mitigating haze and the fires that drive the pollution crisis – a good example of a socio-environmental feedback. In this paper, we seek to answer three research questions:

  • When is the ‘haze season’ (onset, duration)?
  • How is ‘haze season’ portrayed in the media? and
  • What role does the haze ‘seasonality’ play in shaping people’s behaviour towards haze? Does the new season play a role in normalisation (e.g. desensitisation), adaptation (e.g. wearing masks, indoor activities) and mitigation (e.g. fighting haze, activism) behaviours?

To answer these questions, we analysed news articles published in Indonesia, Malaysia and Singapore through the Factiva database.

First, we identified the monthly distribution of newspaper articles mentioning ‘haze’ and ‘haze season’. Then, we identified keywords associated with ‘haze’ and ‘haze season’ by comparing the words found in the articles mentioning each concept with a corpus of words drawn from general usage in the year 2020. This was followed by a keyness analysis between two corpora of articles, namely articles that mention only ‘haze’ and articles that mention ‘haze season’. By doing so, we compare the differences between the two corpora in order to discover divergent themes. Finally, we used structural topic modelling (STM) to identify topic clusters.
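
For readers unfamiliar with keyness analysis, the sketch below computes a Dunning-style log-likelihood keyness for each word in the ‘haze season’ corpus relative to the ‘haze’-only corpus; tokenisation, stop-word handling, and significance thresholds are left out and would be study-specific choices.

```python
import math
from collections import Counter

def keyness_scores(target_tokens, reference_tokens):
    """Log-likelihood keyness of each word. Positive scores mark words that
    are relatively more frequent in the target corpus ('haze season'
    articles), negative scores those more frequent in the reference corpus
    ('haze'-only articles)."""
    n_t, n_r = len(target_tokens), len(reference_tokens)
    c_t, c_r = Counter(target_tokens), Counter(reference_tokens)
    scores = {}
    for w in set(c_t) | set(c_r):
        a, b = c_t[w], c_r[w]
        e_t = n_t * (a + b) / (n_t + n_r)      # expected count in target
        e_r = n_r * (a + b) / (n_t + n_r)      # expected count in reference
        ll = 2 * ((a * math.log(a / e_t) if a else 0.0) +
                  (b * math.log(b / e_r) if b else 0.0))
        scores[w] = ll if a / n_t >= b / n_r else -ll
    return scores
```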

We find a strong distinction between the themes of articles written about the ‘haze season’ and articles that refer only to the haze problem. Articles that mention ‘haze’ but not ‘haze season’ focus on the root causes of the haze crisis – peatland fires in Indonesia, oil palm plantations, deforestation – as well as on geopolitical cooperation to prevent fires (e.g. through ASEAN). Both our keyness and STM analyses revealed that the ‘haze season’ articles have a strong association with the effects of the haze crisis, particularly during the haze season months – poor air quality, pollution standards, mask-wearing, air filtration – suggesting that seasonality plays a role in adaptation behaviour. Outside of the haze season months, articles mentioning the new season focus more on haze mitigation and associated political action.

How to cite: Liu, F., Yian, V., Holden, J., and Smith, T.: Seasonality in the Anthropocene: On the construction of Southeast Asia’s 'haze season' in the media, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13091, https://doi.org/10.5194/egusphere-egu22-13091, 2022.

Under the current changing climate and social governance conditions, wildfire occurrence in Latin America has become a critical issue, transcending academic and technical disputes and reaching sensitive socio-political arenas. Developing a new vision and capacities for the integral and intersectoral management of wildfires, instead of only fighting them, requires the inclusion of multiple perspectives and actors and the recovery of the adaptive knowledge and practices of the local communities that inhabit natural spaces. This paper summarizes the main results and advances achieved during more than 20 years of learning and working with the Pemón Indigenous peoples in northern Amazonia, and the scaling-up towards new fire management policies in Venezuela. Our results reveal a sophisticated Indigenous knowledge system on using fire in the main subsistence activities, especially shifting cultivation, and on collaborative burning practices at the savanna-forest transition to protect forests from catastrophic wildfires. In addition, long-term fire experiments demonstrated that fire exclusion practices promote more severe wildfires through fuel accumulation, enhanced by drier and warmer weather conditions. Through the inclusion of Indigenous peoples, firefighters, public officials, and academics in field research and joint experimentation, as well as in debates and dialogues on socio-ecological aspects, a paradigm shift on fire was successfully negotiated that values the relevance of the ancient Pemón culture in Venezuela for the sustainable management of resources and for the capacity to adapt to and mitigate climate change. Currently, these experiences are being capitalized on to create a national integrated fire management policy preserving the same participatory, intercultural, and intersectoral principles.

How to cite: Alejandra Bilbao, B.: Experiences and lessons learned in the construction of a new paradigm of integrated fire management in Venezuela, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13420, https://doi.org/10.5194/egusphere-egu22-13420, 2022.

EGU22-13459 | Presentations | ITS5.1/BG8.5

Relationships Between Building Features and Wildfire Damage in California, USA and Pedrógão Grande, Portugal 

Simona Dossi, Birgitte Messerschmidt, Luis Mario Ribeiro, Miguel Almeida, and Guillermo Rein

Inhabited areas adjacent to wildland, known as the wildland-urban interface (WUI), often experience wildfire damage. Although knowledge on the external fire protection of buildings has greatly advanced through post-fire inspections and experimental studies, intercomparison between studies in different regions is lacking. Here we quantitatively compare two large post-fire building damage inspection databases: the 2013-2017 California Department of Forestry and Fire Protection damage inspections in the USA, and the 2017 Pedrógão Grande Fire Complex post-fire investigation in Portugal. We compare the relationships between different building features and wildfire damage, and propose the Wildfire Resistance Index (WRI), a preliminary wildfire risk index applied to rural buildings. Results indicate that exterior walls, windows, and vent screens have the strongest correlation with damage level in California, and exterior walls and preservation level in Portugal. The correlation strength indicates each feature’s relative importance in protecting the building from wildfire damage. The WRI value corresponds to the building’s net number of fire-resistant features and is inversely related to the percentage of destroyed buildings. In California, 93% of buildings with a WRI of -0.4 were destroyed, compared to 73% of buildings with a WRI of 1; in Portugal, 75% of buildings with a WRI of 0.5 were highly damaged or destroyed, decreasing to 44% of buildings with a WRI of 1. Results indicate that the number of fire-resistant building features directly relates to the building’s damage probability, and that the WRI can be used to quantify building wildfire resistance.
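
The abstract does not give the exact WRI formula; the sketch below encodes one reading that is consistent with the quoted range of values (roughly -1 to 1), namely the net count of fire-resistant features scaled by the number of inspected features. It is an assumption for illustration, not the published definition.

```python
def wildfire_resistance_index(features):
    """features: dict mapping each inspected building feature (e.g. exterior
    walls, windows, vent screens) to +1 if recorded as fire-resistant,
    -1 if recorded as vulnerable, and 0 if not assessed. Returns the net
    share of resistant features among those inspected (assumed definition)."""
    if not features:
        return 0.0
    return sum(features.values()) / len(features)

# Example: 3 resistant, 1 vulnerable, 1 unknown out of 5 inspected features
# -> (3 - 1) / 5 = 0.4
print(wildfire_resistance_index(
    {"walls": 1, "windows": 1, "vents": 1, "deck": -1, "roof": 0}))
```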

How to cite: Dossi, S., Messerschmidt, B., Ribeiro, L. M., Almeida, M., and Rein, G.: Relationships Between Building Features and Wildfire Damage in California, USA and Pedrógão Grande, Portugal, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13459, https://doi.org/10.5194/egusphere-egu22-13459, 2022.

CC BY 4.0