ESSI – Earth & Space Science Informatics

EGU22-6482 | Presentations | MAL16 | Ian McHarg Medal Lecture

On Machine Learning from Environmental Data 

Mikhail Kanevski

Geo- and environmental sciences produce a wide variety of data in large volumes, which are extensively used both in fundamental research on Earth processes and in important real-life decision-making. Most natural phenomena are non-linear, multivariate, highly variable and correlated at many spatio-temporal scales. The analysis and treatment of such complex data, and their integration/assimilation with science-based models, is a difficult problem. Contemporary machine learning (ML) offers an important set of effective approaches to address this problem at all phases of a study.

Nowadays, the geosciences are among the major consumers of ML ideas and technologies. To a large degree, this is driven by the local and global challenges facing humanity: sustainable development, biodiversity, social and natural hazards and risks, weather and climate forecasting, remote-sensing Earth observation, etc. Although ML is, in theory, a universal modelling tool, the success of its applications depends significantly on the problem formulation, the quantity and quality of the data, and the objectives of the study. Therefore, efficient application of ML demands good knowledge of the phenomena under study and a profound understanding of the learning algorithms, which can be achieved through close collaboration between experts in the corresponding domains.

In this presentation, the study of geo- and environmental data using different machine learning algorithms is reviewed. A problem-oriented approach, following a generic data-driven methodology, is applied. The methodology consists of several important steps, in particular: optimization of monitoring and data collection; comprehensive exploratory data analysis and visualization; feature engineering and selection of relevant variables; modelling with careful validation and testing; and explanation and communication of the results. Advanced experimentation with data using different supervised and unsupervised ML algorithms helps in better understanding the original data and the constructed input feature space, obtaining more reliable and robust results, and making intelligent decisions. The presentation is accompanied by simulated and real-data case studies from natural hazards (avalanches, forest fires, landslides), environmental risks (pollution) and renewable energy assessment. In conclusion, some general remarks and future perspectives are discussed.

 

How to cite: Kanevski, M.: On Machine Learning from Environmental Data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6482, https://doi.org/10.5194/egusphere-egu22-6482, 2022.

EGU22-12895 | Presentations | MAL16 | ESSI Division Outstanding ECS Award Lecture

Artificial Intelligence and Earth System Modeling - revisiting Research of the Past and Future 

Christopher Kadow, David M. Hall, Uwe Ulbrich, Igor Kröner, Sebastian Illing, and Ulrich Cubasch

Today's climate science is driven by IT more than ever. Earth system models on high-performance computers (HPC) are common tools for researching the past and projecting it into the future. In addition, statistical modelling has been reborn thanks to modern computer architectures equipped with artificial intelligence (from ensemble learning to deep learning). Future advances in machine learning will also shape climate research through analysis tools, prediction techniques, signal and event classification, post-processing, Model Output Statistics (MOS), evaluation and verification, etc. This presentation looks at current research on the future (part one) and the past (part two) of our climate system using AI/ML ideas and technologies in combination with numerical climate models, drawing on two corresponding publications. A special focus is on the importance of climate science, where the needs are, and how to choose the AI/ML hammer wisely:

(1) FUTURE: Derived from machine (ensemble) learning and bagging, a new hybrid climate prediction technique called the 'Ensemble Dispersion Filter' is developed. It exploits two important climate prediction paradigms: the ocean's heat capacity and the advantage of the ensemble mean. The Ensemble Dispersion Filter averages the ocean temperatures of the ensemble members every three months, uses this ensemble mean as a restart condition for each member, and continues the prediction. The evaluation shows that the Ensemble Dispersion Filter yields a significant improvement in predictive skill compared to the unfiltered reference system. Even in comparison with prediction systems of larger ensemble size and higher resolution, the Ensemble Dispersion Filter system performs better. In particular, the prediction of the global average temperature for forecast years 2 to 5 shows a significant skill improvement.
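The filtering step can be illustrated with a toy numerical experiment (a sketch only; the AR(1) "ocean" dynamics and all variable names are invented here, not taken from the published system):

```python
import numpy as np

def run_ensemble(n_members, n_steps, filter_every=None, seed=0):
    """Toy AR(1) 'ocean temperature' ensemble. If filter_every is set,
    apply dispersion filtering: every filter_every steps, restart all
    members from the ensemble mean (the Ensemble Dispersion Filter idea)."""
    rng = np.random.default_rng(seed)
    state = rng.normal(0.0, 1.0, n_members)
    history = [state.copy()]
    for step in range(1, n_steps + 1):
        state = 0.9 * state + rng.normal(0.0, 0.3, n_members)  # member-wise dynamics
        if filter_every and step % filter_every == 0:
            state[:] = state.mean()  # restart every member from the ensemble mean
        history.append(state.copy())
    return np.array(history)

free = run_ensemble(10, 24)                      # unfiltered reference ensemble
filtered = run_ensemble(10, 24, filter_every=3)  # filtered every 3 'months'
assert filtered[3].std() < 1e-12                 # spread collapses at restart times
assert free[3].std() > 0
```

The key design choice visible even in this toy: filtering does not alter the dynamics, it only periodically resets the spread, letting the ensemble mean act as a restart condition.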

Kadow, C., Illing, S., Kröner, I., Ulbrich, U., and Cubasch, U. (2017), Decadal climate predictions improved by ocean ensemble dispersion filtering, J. Adv. Model. Earth Syst., 9, 1138–1149, doi:10.1002/2016MS000787.

(2) PAST: Climate change research relies on climate information of the past. Historic records of temperature observations form global gridded datasets like HadCRUT4, which is investigated e.g. in the IPCC reports. However, these record-combining datasets are sparse in the past, and even today they contain missing values. Here we show that artificial intelligence (AI) technology can be applied to reconstruct these missing climate values. We found that recently successful image-inpainting technologies, using partial convolutions in a CUDA-accelerated deep neural network, can be trained on 20CR reanalysis and CMIP5 experiments. The derived AI networks are capable of independently reconstructing artificially trimmed versions of 20CR and CMIP5 in grid space for every given month using the HadCRUT4 missing-value mask. The evaluation reaches high temporal correlations and low errors for the global mean temperature.
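A single-channel partial convolution, the building block named above, can be sketched in NumPy (a simplified illustration of the partial-convolution idea, not the trained network):

```python
import numpy as np

def partial_conv2d(x, mask, kernel):
    """Single-channel partial convolution (simplified): convolve only over
    valid pixels, renormalise by the local mask coverage, and update the
    mask so holes shrink after each layer."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x * mask, ((ph, ph), (pw, pw)))
    mp = np.pad(mask, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    new_mask = np.zeros_like(mask, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            win_x = xp[i:i + kh, j:j + kw]
            win_m = mp[i:i + kh, j:j + kw]
            valid = win_m.sum()
            if valid > 0:
                # renormalise by the fraction of valid pixels in the window
                out[i, j] = (kernel * win_x).sum() * (kh * kw / valid)
                new_mask[i, j] = 1.0
    return out, new_mask

field = np.ones((5, 5))                 # toy temperature field
mask = np.ones((5, 5)); mask[2, 2] = 0  # one missing value (cf. HadCRUT4 mask)
kernel = np.full((3, 3), 1 / 9)         # mean filter for illustration
filled, new_mask = partial_conv2d(field, mask, kernel)
assert new_mask[2, 2] == 1.0            # hole closed by valid neighbours
assert abs(filled[2, 2] - 1.0) < 1e-9   # reconstructed from valid pixels only
```

Stacking such layers (with learned kernels) lets the network fill progressively larger gaps, which is what the mask update enables.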

Kadow, C., Hall, D.M. & Ulbrich, U. Artificial intelligence reconstructs missing climate information. Nat. Geosci. 13, 408–413 (2020). https://doi.org/10.1038/s41561-020-0582-5

How to cite: Kadow, C., Hall, D. M., Ulbrich, U., Kröner, I., Illing, S., and Cubasch, U.: Artificial Intelligence and Earth System Modeling - revisiting Research of the Past and Future, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12895, https://doi.org/10.5194/egusphere-egu22-12895, 2022.

ESSI1 – Next Generation Analytics for Scientific Discovery: Data Science, Machine Learning, AI

EGU22-1346 | Presentations | ESSI1.2

Enhance pluvial flood risk assessment using spatio-temporal machine learning models 

Andrea Critto, Marco Zanetti, Elena Allegri, Anna Sperotto, and Silvia Torresan

Extreme weather events (e.g., heavy rainfall) are natural hazards that pose increasing threats to many sectors and sub-regions worldwide (IPCC, 2014), exposing people and assets to damaging effects. In order to predict pluvial flood risk under different spatio-temporal conditions, three generalized Machine Learning models were developed and applied to the Metropolitan City of Venice: Logistic Regression, Neural Networks and Random Forest. The models considered 60 historical pluvial flood events that occurred in the timeframe 1995-2020. The historical events helped to identify and prioritize sub-areas that are more likely to be affected by pluvial flood risk due to heavy precipitation. In addition, while developing the model, 13 triggering factors were selected and assessed: aspect, curvature, distance to river, distance to road, distance to sea, elevation, land use, NDVI, permeability, precipitation, slope, soil and texture. A forward feature-selection method based on the AUC score was applied to understand which features best mitigate spatio-temporal overfitting in pluvial flood prediction. Results of the analysis showed that the most accurate models were obtained with the Logistic Regression approach, which was used to provide pluvial flood risk maps for each of the 60 major historical events that occurred in the case study area. The model showed high accuracy, and most of the events that occurred in the Metropolitan City of Venice were properly predicted, demonstrating that Machine Learning can substantially improve and speed up disaster risk assessment and mapping, helping to overcome common bottlenecks of physically-based simulations such as computational complexity and the need for large datasets of high-resolution information.
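AUC-driven forward feature selection can be sketched as follows (pure NumPy; an ordinary-least-squares score stands in for the fitted classifiers, and all data and names are synthetic illustrations, not the study's pipeline):

```python
import numpy as np

def auc(scores, y):
    """Rank-based AUC: probability that a positive outranks a negative."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def fit_score(X, y):
    """Least-squares linear score (stand-in for logistic regression)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return Xb @ w

def forward_selection(X, y, max_features=3):
    """Greedy forward selection: add the feature that most improves AUC."""
    selected, best_auc = [], 0.5
    while len(selected) < max_features:
        gains = {}
        for j in range(X.shape[1]):
            if j in selected:
                continue
            gains[j] = auc(fit_score(X[:, selected + [j]], y), y)
        j_best = max(gains, key=gains.get)
        if gains[j_best] <= best_auc:
            break
        selected.append(j_best)
        best_auc = gains[j_best]
    return selected, best_auc

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                       # 5 toy 'triggering factors'
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 300) > 0).astype(int)
feats, score = forward_selection(X, y)
assert 0 in feats and score > 0.8                   # informative factor recovered
```

In the study, the AUC would be evaluated on held-out spatio-temporal folds rather than in-sample, which is what makes the procedure a guard against overfitting.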

How to cite: Critto, A., Zanetti, M., Allegri, E., Sperotto, A., and Torresan, S.: Enhance pluvial flood risk assessment using spatio-temporal machine learning models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1346, https://doi.org/10.5194/egusphere-egu22-1346, 2022.

EGU22-3131 | Presentations | ESSI1.2

Language model for Earth science for semantic search 

Rahul Ramachandran, Muthukumaran Ramasubramanian, Prasanna Koirala, Iksha Gurung, and Manil Maskey

Recent advances in technology have transformed the Natural Language Technology (NLT) landscape, specifically the use of transformers to build language models such as BERT and GPT-3. Furthermore, it has been shown that the quality and domain-specificity of the input corpus to a language model can improve downstream application results. However, Earth science research has seen minimal effort focused on building and using a domain-specific language model.

We utilize a transfer-learning solution that takes an existing language model trained for general science (SciBERT) and fine-tunes it using abstracts and full text extracted from various Earth science journals to create BERT-E (BERT for Earth science). The training process used 270k+ Earth science articles with almost 6 million paragraphs. We used Masked Language Modeling (MLM) to train the transformer model. MLM works by masking random words in a paragraph and optimizing the model to predict the masked words correctly. BERT-E was evaluated on a downstream keyword classification task, and its performance was compared against classification results using the original SciBERT language model. The SciBERT-based model attained an accuracy of 89.99%, whereas the BERT-E-based model attained 92.18%, showing an improvement in overall performance.
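The MLM input preparation described above can be sketched as follows (simplified: real BERT training also replaces a fraction of selected tokens with random words or leaves them unchanged):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=42):
    """Masked Language Modeling input preparation: hide a random subset
    of tokens; the model is then trained to predict the originals."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok          # label the model must recover
            masked.append(mask_token)
        else:
            masked.append(tok)
    return masked, targets

sentence = "subduction of the african plate drives seismicity in the hellenic arc".split()
masked, targets = mask_tokens(sentence, mask_prob=0.3)
assert len(masked) == len(sentence)
assert all(masked[i] == "[MASK]" for i in targets)
```

Training on domain text means the masked targets are Earth-science terms, which is exactly where a general-science model underperforms.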

We investigate employing language models to provide new semantic search capabilities for unstructured text such as papers. This search capability utilizes a knowledge graph generated from Earth science corpora, together with a language model and graph convolutions, to surface latent and related sentences for a natural language query. The sentences in the papers are modeled as nodes in the graph, and these nodes are connected through entities. The language model gives each sentence a numeric representation. Graph convolutions are then applied to the sentence embeddings to obtain a vector representation of each sentence combined with a representation of the surrounding graph structure. This approach utilizes both the power of adjacency inherently encoded in graph structures and the latent knowledge captured in the language model. Our initial proof-of-concept prototype used the SimCSE training algorithm (with the TinyBERT architecture) as the embedding model. This framework has demonstrated an improved ability to surface relevant, latent information based on the input query. We plan to show new results using the domain-specific BERT-E model.
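One graph-convolution step over sentence embeddings might look like this (a generic normalised-adjacency layer; the project's actual architecture and weights are not specified in the abstract):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: symmetrically normalised adjacency
    with self-loops, then a linear transform and ReLU. Each node's new
    vector mixes its own embedding with its neighbours'."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

# 4 'sentence' nodes; edges where sentences share an entity
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                # toy sentence embeddings (one-hot)
W = np.full((4, 2), 0.5)     # toy weights
H = gcn_layer(A, X, W)
assert H.shape == (4, 2)
```

After one or more such layers, each sentence vector also encodes its graph neighbourhood, which is what lets adjacency-related sentences surface for a query.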

How to cite: Ramachandran, R., Muthukumaran Ramasubramanian, M., Koirala, P., Gurung, I., and Maskey, M.: Language model for Earth science for semantic search, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3131, https://doi.org/10.5194/egusphere-egu22-3131, 2022.

EGU22-3855 | Presentations | ESSI1.2

CGC: an open-source Python module for geospatial data clustering 

Ou Ku, Francesco Nattino, Meiert Grootes, Emma Izquierdo-Verdiguier, Serkan Girgin, and Raul Zurita-Milla

With the growing ubiquity of large multi-dimensional geodata cubes, clustering techniques have become essential to extracting patterns and creating insights from data cubes. Aiming to meet this increasing need, we present Clustering Geodata Cubes (CGC): an open-source Python package designed for partitional clustering of geospatial data. CGC provides efficient clustering methods to identify groups of similar data. In contrast to traditional techniques, which act on a single dimension, CGC is able to perform both co-clustering (clustering across two dimensions, e.g., spatial and temporal) and tri-clustering (clustering across three dimensions, e.g., spatial, temporal, and thematic), as well as to subsequently refine the identified clusters. CGC also includes scalable approaches that suit both small and big datasets, and it can be efficiently deployed on a range of computational infrastructures, from single machines to computing clusters. As a case study, we present an analysis of spring onset indicator datasets at continental scale.
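The idea of co-clustering, grouping rows (e.g., locations) and columns (e.g., time steps) of a data matrix simultaneously, can be illustrated with a minimal alternating scheme (this sketch is not CGC's API or algorithm; consult the package documentation for the real interface):

```python
import numpy as np

def coclustering(Z, n_row_clusters, n_col_clusters, n_iter=50, seed=0):
    """Minimal alternating co-clustering: rows and columns are repeatedly
    reassigned to whichever cluster's block means best fit them."""
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, n_row_clusters, Z.shape[0])
    cols = rng.integers(0, n_col_clusters, Z.shape[1])
    for _ in range(n_iter):
        # block means for the current assignment
        M = np.zeros((n_row_clusters, n_col_clusters))
        for r in range(n_row_clusters):
            for c in range(n_col_clusters):
                block = Z[rows == r][:, cols == c]
                M[r, c] = block.mean() if block.size else Z.mean()
        # reassign each row, then each column, to the closest block profile
        rows = np.array([np.argmin([((Z[i, :] - M[r, cols]) ** 2).sum()
                                    for r in range(n_row_clusters)])
                         for i in range(Z.shape[0])])
        cols = np.array([np.argmin([((Z[:, j] - M[rows, c]) ** 2).sum()
                                    for c in range(n_col_clusters)])
                         for j in range(Z.shape[1])])
    return rows, cols

# checkerboard test matrix: two row groups x two column groups
Z = np.block([[np.full((3, 4), 1.0), np.full((3, 5), 5.0)],
              [np.full((4, 4), 5.0), np.full((4, 5), 1.0)]])
rows, cols = coclustering(Z, 2, 2)
assert rows.shape == (7,) and cols.shape == (9,)
```

Tri-clustering extends the same idea with a third (e.g., thematic) index set.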

How to cite: Ku, O., Nattino, F., Grootes, M., Izquierdo-Verdiguier, E., Girgin, S., and Zurita-Milla, R.: CGC: an open-source Python module for geospatial data clustering, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3855, https://doi.org/10.5194/egusphere-egu22-3855, 2022.

EGU22-3940 | Presentations | ESSI1.2

The Analysis of the Aftershock Sequence of the Recent Mainshock in Arkalochori, Crete Island Greece 

Alexandra Moshou, Antonios Konstantaras, and Panagiotis Argyrakis

Forecasting the evolution of natural hazards is a critical problem in the natural sciences. Earthquake forecasting is one such example, and it is a difficult task due to the complexity of earthquake occurrence. To date, earthquake prediction has relied mainly on empirical methods, specifically on the seismic history of a given area, considering the time before the occurrence of the main earthquake. The analysis and processing of seismicity play a critical role in modern statistical seismology. In this work, a first attempt is made to study and draw sound conclusions regarding prediction for a seismic sequence using appropriate statistical methods, such as Bayesian prediction, taking into account the uncertainties of the model parameters. The theory was applied to the recent seismic sequence in the area of Arkalochori, Crete Island, Greece (2021, Mw 6.0). The rich seismic sequence that took place immediately after the main earthquake, comprising approximately 4,000 events of magnitude ML > 1 over the following three months, allowed calculating the probability of the largest expected earthquake occurring within a given time, as well as the probability that the largest aftershock following a major earthquake exceeds a certain magnitude.
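As an illustration of the kind of probability statement described, the classic Reasenberg–Jones formulation gives the chance of at least one aftershock above a magnitude threshold within a time window after a mainshock. The parameter values below are the published generic (California) defaults, used here purely for illustration, not the values fitted to the Arkalochori sequence:

```python
import math

def aftershock_probability(m_main, m_min, t1, t2, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg & Jones (1989)-style probability of at least one
    aftershock with magnitude >= m_min in the window [t1, t2] (days)
    after a mainshock of magnitude m_main, assuming a Poisson process
    with Omori-law decay and a Gutenberg-Richter magnitude distribution."""
    rate_scale = 10 ** (a + b * (m_main - m_min))
    if abs(p - 1.0) < 1e-9:
        n = rate_scale * (math.log(t2 + c) - math.log(t1 + c))
    else:
        n = rate_scale * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return 1.0 - math.exp(-n)   # P(at least one event) for a Poisson count

p5 = aftershock_probability(6.0, 5.0, 0.01, 30)   # M>=5.0 within 30 days
p45 = aftershock_probability(6.0, 4.5, 0.01, 30)  # M>=4.5 within 30 days
assert 0 < p5 < p45 < 1   # lower thresholds are always more probable
```

A Bayesian treatment, as in the abstract, would additionally integrate these probabilities over posterior distributions of a, b, c and p rather than fixing them.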

References:

  • Ganas, A., Fassoulas, C., Moshou, A., Bozionelos, G., Papathanassiou, G., Tsimi, C., & Valkaniotis, S. (2017). Geological and seismological evidence for NW-SE crustal extension at the southern margin of Heraklion basin, Crete. Bulletin of the Geological Society of Greece, 51, 52-75. doi: https://doi.org/10.12681/bgsg.15004
  • Konstantaras, A.J. (2016). Expert knowledge-based algorithm for the dynamic discrimination of interactive natural clusters. Earth Science Informatics. 9 (1), 95-100.
  • Konstantaras, A. (2020). Deep learning and parallel processing spatio-temporal clustering unveil new Ionian distinct seismic zone. Informatics. 7 (4), 39.
  • Moshou, A., Papadimitriou, E., Drakatos, G., Evangelidis, C., Karakostas, V., Vallianatos, F., & Makropoulos, K. (2014). Focal mechanisms at the convergent plate boundary in Southern Aegean, Greece. EGU General Assembly Conference Abstracts, 12185.
  • Moshou, A., Argyrakis, P., Konstantaras, A., Daverona, A.C. & Sagias, N.C. (2021). Characteristics of Recent Aftershocks Sequences (2014, 2015, 2018) Derived from New Seismological and Geodetic Data on the Ionian Islands, Greece. 6 (2), 8.
  • Papazachos, C.B., & Nolet, G. (1997). P and S velocity structure of the Hellenic area obtained by robust nonlinear inversion of travel times. J. Geophys. Res., 102, 8349–8367.

How to cite: Moshou, A., Konstantaras, A., and Argyrakis, P.: The Analysis of the Aftershock Sequence of the Recent Mainshock in Arkalochori, Crete Island Greece, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3940, https://doi.org/10.5194/egusphere-egu22-3940, 2022.

EGU22-5487 | Presentations | ESSI1.2

3D Mapping of Active Underground Faults Enabled by Heterogeneous Parallel Processing Spatio-Temporal Proximity and Clustering Algorithms 

Alexandra Moshou, Antonios Konstantaras, Nikitas Menounos, and Panagiotis Argyrakis

Underground faults act as storage elements for the strain energy accumulated at the borders of active tectonic plates. In particular, along the southern front of the Hellenic seismic arc, a steady yearly accumulation of strain energy is due to the constant rate at which the African plate subducts beneath the Eurasian plate. Partial release of the energy stored in a particular underground fault manifests as an earthquake once it reaches the surface of the Earth's crust. The information obtained for each recorded earthquake includes, among others, the surface location and the estimated hypocentre depth. Considering that hundreds of thousands of earthquakes have been recorded in this area, the accumulated hypocentre depths provide a most valuable source of information regarding the in-depth extent of the seismically active parts of underground faults. This research work applies expert-knowledge spatio-temporal clustering to previously reported distinct seismic cluster zones, aiming to associate each main earthquake, along with its recorded foreshocks and aftershocks, with a single underground fault in existing two-dimensional mappings. This process is enabled by heterogeneous parallel-processing algorithms encompassing both proximity and agglomerative density-based clustering, applied only to the main seismic events to be mapped. Once a main earthquake is associated with a particular known underground fault, the fault point with maximum proximity to the earthquake's hypocentre takes on its location parameters, additionally incorporating the dimension of depth alongside the initial planar dimensions of latitude and longitude. The ranges of depth variation provide a notable indication of the in-depth extent of the seismically active part(s) of underground faults, enabling their 3D model mapping.
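The association step can be sketched as a nearest-fault assignment that accumulates a depth range per fault (toy coordinates and planar distances for illustration; a real implementation would use geodesic distances and the clustering described above):

```python
import numpy as np

def associate_events_to_faults(events, faults):
    """Assign each hypocentre (lon, lat, depth) to the nearest mapped
    2-D fault trace (arrays of lon/lat vertices) and track the depth
    range of events per fault -- a rough proxy for the in-depth extent
    of the seismically active fault segment."""
    depth_ranges = {}
    for lon, lat, depth in events:
        dists = [np.min(np.hypot(f[:, 0] - lon, f[:, 1] - lat)) for f in faults]
        k = int(np.argmin(dists))               # nearest fault by vertex proximity
        lo, hi = depth_ranges.get(k, (depth, depth))
        depth_ranges[k] = (min(lo, depth), max(hi, depth))
    return depth_ranges

fault_a = np.array([[25.0, 35.0], [25.2, 35.1]])   # toy 2-D fault traces
fault_b = np.array([[26.0, 34.5], [26.1, 34.6]])
events = [(25.05, 35.02, 8.0), (25.15, 35.08, 14.0), (26.05, 34.55, 20.0)]
ranges = associate_events_to_faults(events, [fault_a, fault_b])
assert ranges[0] == (8.0, 14.0)   # active depth extent inferred for fault A
```

The per-fault depth range is exactly the third dimension that turns a 2-D fault map into the 3-D mapping described.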

Indexing terms: spatio-temporal proximity and clustering algorithms, heterogeneous parallel processing, CUDA, 3D mapping of underground faults

References

Axaridou A., I. Chrysakis, C. Georgis, M. Theodoridou, M. Doerr, A. Konstantaras, and E. Maravelakis. 3D-SYSTEK: Recording and exploiting the production workflow of 3D-models in cultural heritage. IISA 2014 - 5th International Conference on Information, Intelligence, Systems and Applications, 51-56, 2014.

Konstantaras A. Deep learning and parallel processing spatio-temporal clustering unveil new Ionian distinct seismic zone. Informatics. 7 (4), 39, 2020.

Konstantaras A.J. Expert knowledge-based algorithm for the dynamic discrimination of interactive natural clusters. Earth Science Informatics. 9 (1), 95-100, 2016.

Konstantaras A.J., E. Katsifarakis, E. Maravelakis, E. Skounakis, E. Kokkinos and E. Karapidakis. Intelligent spatial-clustering of seismicity in the vicinity of the Hellenic Seismic Arc. Earth Science Research 1 (2), 1-10, 2012.

Konstantaras A., F. Valianatos, M.R. Varley, J.P. Makris. Soft-Computing modelling of seismicity in the southern Hellenic Arc. IEEE Geoscience and Remote Sensing Letters, 5 (3), 323-327, 2008.

Konstantaras A., M.R. Varley, F. Valianatos, G. Collins and P. Holifield. Recognition of electric earthquake precursors using neuro-fuzzy methods: methodology and simulation results. Proc. IASTED Int. Conf. Signal Processing, Pattern Recognition and Applications (SPPRA 2002), Crete, Greece, 303-308, 2002.

Maravelakis E., A. Konstantaras, K. Kabassi, I. Chrysakis, C. Georgis and A. Axaridou. 3DSYSTEK web-based point cloud viewer. IISA 2014 - 5th International Conference on Information, Intelligence, Systems and Applications, 262-266, 2014.

How to cite: Moshou, A., Konstantaras, A., Menounos, N., and Argyrakis, P.: 3D Mapping of Active Underground Faults Enabled by Heterogeneous Parallel Processing Spatio-Temporal Proximity and Clustering Algorithms, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5487, https://doi.org/10.5194/egusphere-egu22-5487, 2022.

EGU22-6955 | Presentations | ESSI1.2

Novel approaches to model assessment and interpretation in geospatial machine learning

Alexander Brenning

As the interpretability and explainability of artificial-intelligence decisions gain attention, novel approaches are needed to develop diagnostic tools that account for the unique challenges of geospatial and environmental data, including spatial dependence and high dimensionality; these challenges are addressed in this contribution. Building upon the geostatistical tradition of distance-based measures, spatial prediction error profiles (SPEPs) and spatial variable importance profiles (SVIPs) are introduced as novel model-agnostic assessment and interpretation tools that explore the behavior of models at different prediction horizons. Moreover, to address the challenge of interpreting the joint effects of strongly correlated or high-dimensional features, often found in environmental modeling and remote sensing, a model-agnostic approach is developed that distills aggregated relationships from complex models. The utility of these techniques is demonstrated in two case studies: a regionalization task in an environmental-science context, and a classification task from multitemporal remote sensing of land use. In these case studies, SPEPs and SVIPs successfully highlight differences and surprising similarities of geostatistical methods, linear models, random forest, and hybrid algorithms. With 64 correlated features in the remote-sensing case study, the transformation-based interpretation approach successfully summarizes high-dimensional relationships in a small number of diagrams.

The novel diagnostic tools enrich the toolkit of geospatial data science, and may improve machine-learning model interpretation, selection, and design.
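The idea behind a spatial prediction error profile can be sketched as binning prediction errors by each test point's distance to the nearest training location (an illustrative reconstruction with synthetic data, not the author's implementation):

```python
import numpy as np

def spatial_error_profile(train_xy, test_xy, residuals, bins):
    """Sketch of a spatial prediction error profile (SPEP): bin absolute
    prediction errors by distance to the nearest training location,
    revealing how skill decays with prediction distance. Keys are the
    upper edge of each distance bin."""
    d = np.array([np.min(np.hypot(train_xy[:, 0] - x, train_xy[:, 1] - y))
                  for x, y in test_xy])
    idx = np.digitize(d, bins)
    return {b: np.mean(np.abs(residuals[idx == i]))
            for i, b in enumerate(bins) if (idx == i).any()}

rng = np.random.default_rng(0)
train_xy = rng.uniform(0, 1, (50, 2))          # clustered training samples
test_xy = rng.uniform(0, 3, (200, 2))          # predictions across a wider region
# toy residuals that grow with distance from the training cloud
d_true = np.array([np.min(np.hypot(train_xy[:, 0] - x, train_xy[:, 1] - y))
                   for x, y in test_xy])
residuals = d_true * rng.normal(1.0, 0.1, 200)
profile = spatial_error_profile(train_xy, test_xy, residuals,
                                bins=np.array([0.0, 0.5, 1.0, 2.0]))
assert profile[2.0] > profile[0.5]             # error grows with prediction distance
```

Comparing such profiles across models (geostatistical, linear, random forest, hybrid) is what makes the tool model-agnostic.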

How to cite: Brenning, A.: Novel approaches to model assessment and interpretation in geospatial machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6955, https://doi.org/10.5194/egusphere-egu22-6955, 2022.

EGU22-7529 | Presentations | ESSI1.2

Global maps from local data: Towards globally applicable spatial prediction models 

Marvin Ludwig, Álvaro Moreno Martínez, Norbert Hölzel, Edzer Pebesma, and Hanna Meyer

Global-scale maps are an important tool to provide ecologically relevant environmental variables to researchers and decision makers. Usually, these maps are created by training a machine learning algorithm on field-sampled reference data and applying the resulting model to associated information from satellite imagery or globally available environmental predictors. However, field samples are often sparse and clustered in geographic space, representing only parts of the global environment. Machine learning models are therefore prone to overfitting to the specific environments they are trained on, especially when a large set of predictor variables is utilized. Consequently, model validations have to include an analysis of the model's transferability to regions where no training samples are available, e.g. by computing the Area of Applicability (AOA; Meyer and Pebesma, 2021) of the prediction models.

Here we reproduce three recently published global environmental maps (soil nematode abundances, potential tree cover and specific leaf area) and assess their AOA. We then present a workflow to increase the AOA (i.e. transferability) of the machine learning models. The workflow uses spatial variable selection to train generalized models that include only the predictors most suitable for predictions in regions without training samples. We compared the results to the three original studies in terms of prediction performance and AOA. Results indicate that reducing the predictors to those relevant for spatial prediction leads to a significant increase in model transferability without a significant decrease in prediction quality in areas with high sampling density.
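A stripped-down version of the AOA idea can be sketched as follows (the published method additionally weights predictors by model importance and derives the threshold from cross-validation folds; this is only the core dissimilarity-index logic):

```python
import numpy as np

def area_of_applicability(X_train, X_new):
    """Simplified dissimilarity index / AOA in the spirit of Meyer &
    Pebesma (2021): a new point is inside the AOA if its distance to the
    nearest training point in standardised predictor space does not
    exceed a threshold derived from distances within the training data."""
    mu, sd = X_train.mean(0), X_train.std(0)
    Xt = (X_train - mu) / sd
    Xn = (X_new - mu) / sd
    # nearest-neighbour distances within the training set
    d_train = np.array([np.min(np.linalg.norm(np.delete(Xt, i, 0) - Xt[i], axis=1))
                        for i in range(len(Xt))])
    q1, q3 = np.quantile(d_train, [0.25, 0.75])
    threshold = q3 + 1.5 * (q3 - q1)            # upper-whisker rule
    d_new = np.array([np.min(np.linalg.norm(Xt - x, axis=1)) for x in Xn])
    return d_new <= threshold

rng = np.random.default_rng(2)
X_train = rng.normal(0, 1, (100, 4))    # sampled environments
inside = rng.normal(0, 1, (5, 4))       # same environment as training
outside = rng.normal(8, 1, (5, 4))      # novel, unsampled environment
assert area_of_applicability(X_train, outside).sum() == 0
```

Dropping irrelevant predictors shrinks the space in which this distance is measured, which is why spatial variable selection enlarges the AOA.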

Meyer, H. & Pebesma, E. Predicting into unknown space? Estimating the area of applicability of spatial prediction models. Methods in Ecology and Evolution 2041–210X.13650 (2021) doi:10.1111/2041-210X.13650.

How to cite: Ludwig, M., Moreno Martínez, Á., Hölzel, N., Pebesma, E., and Meyer, H.: Global maps from local data: Towards globally applicable spatial prediction models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7529, https://doi.org/10.5194/egusphere-egu22-7529, 2022.

EGU22-8323 | Presentations | ESSI1.2

Multi-attribute geolocation inference from tweets 

Umair Qazi, Ferda Ofli, and Muhammad Imran

Geotagged social media messages, especially from Twitter, can have a substantial impact on decision-making processes during natural hazards and disasters. For example, such geolocation information can be used to enhance natural hazard detection systems where real-time geolocated tweets can help identify the critical human-centric hotspots of an emergency where urgent help is required.

Our work can extract geolocation information from tweets by making use of five meta-data attributes provided by Twitter. Three of these are free-form text, namely tweet text, user profile description, and user location. The other two attributes are GPS coordinates and place tags.

Tweet text may or may not contain information relevant for geolocation extraction. Where location information is available within the tweet text, we perform toponym extraction using Named Entity Recognition and Classification (NERC). The extracted toponyms are then geocoded using Nominatim (the open-source geocoding software that powers OpenStreetMap search) at various levels such as country, state, county, and city.

A similar process is followed for the user profile description, where only location toponyms identified by NERC are stored and then geocoded using Nominatim at various levels.

The user location field, which is also free-form text, can mention multiple locations (e.g., "USA, UK"). To extract a location from this field, a heuristic algorithm based on a ranking mechanism is adopted that resolves the field to a single location, which can then be mapped at various levels such as country, state, county, and city.
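Such a ranking heuristic might be sketched like this (the comma tokenization and the priority table are hypothetical illustrations; the abstract does not specify the actual ranking criteria):

```python
def resolve_user_location(field, priority):
    """Hypothetical heuristic: a free-form user-location field such as
    'USA, UK' may name several places. Rank the candidates by a priority
    table and keep the best one, so the tweet resolves to a single
    location; unknown names rank last."""
    candidates = [tok.strip() for tok in field.split(",") if tok.strip()]
    ranked = sorted(candidates, key=lambda c: priority.get(c, len(priority)))
    return ranked[0] if ranked else None

prio = {"USA": 0, "UK": 1}   # e.g. ordered by user base or tweet volume
assert resolve_user_location("UK, USA", prio) == "USA"
assert resolve_user_location("", prio) is None
```

The chosen candidate would then be geocoded like any other free-form location string.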

GPS coordinates provide the exact longitude and latitude of the device's location. We perform reverse geocoding to obtain additional location details, e.g., street, city, or country the GPS coordinates belong to. For this purpose, we use Nominatim’s reverse API endpoint to extract city, county, state, and country information.

The place tag provides a bounding box, an exact longitude and latitude, or name information for a location tagged by the user. The place field contains several location attributes, and we extract location information from each of them using different algorithms; Nominatim's search API endpoint is used to extract city, county, state, and country names from the Nominatim response when available.

Our geo-inference pipeline is designed to be used as a plug-in component. The system runs on an Elasticsearch cluster with six nodes for efficient and fast querying and insertion of records, and it has already been tested on geolocating more than two billion COVID-related tweets. The system is able to handle high insertion and query loads. We have implemented smart caching mechanisms to avoid repeated Nominatim calls, since these are expensive operations. Caches are available both for free-form text (Nominatim's search API) and for exact latitude and longitude (Nominatim's reverse API); they reduce the load on Nominatim and give quick access to the most commonly queried terms.
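The caching idea can be sketched with a memoized wrapper around the geocoding call (the backend below is a stand-in dictionary with a call counter, not a real Nominatim query):

```python
from functools import lru_cache

# Hypothetical stand-in for a Nominatim search query, which is expensive
# and should not be repeated for commonly occurring location strings.
CALLS = {"count": 0}

def nominatim_search(query):
    CALLS["count"] += 1
    gazetteer = {"vienna": ("Vienna", "Austria"), "doha": ("Doha", "Qatar")}
    return gazetteer.get(query.strip().lower())

@lru_cache(maxsize=100_000)
def geocode(query):
    """Cached free-form geocoding: repeated queries for the same string
    hit the cache instead of the geocoding backend."""
    return nominatim_search(query)

for q in ["Vienna", "Vienna", "Doha", "Vienna"]:
    geocode(q)
assert CALLS["count"] == 2   # only two distinct backend lookups were made
```

In production, the same pattern applies to reverse geocoding, with rounded coordinates as the cache key.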

With this effort, we hope to provide the necessary means for researchers and practitioners who intend to explore social media data for geo-applications.

How to cite: Qazi, U., Ofli, F., and Imran, M.: Multi-attribute geolocation inference from tweets, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8323, https://doi.org/10.5194/egusphere-egu22-8323, 2022.

EGU22-8648 | Presentations | ESSI1.2

A graph-based fractality index to characterize complexity of urban form using deep graph convolutional neural networks 

Lei Ma, Stefan Seipel, S. Anders Brandt, and Ding Ma

Inspection of the complexity of urban morphology facilitates understanding of human behaviors in urban space, leading to better conditions for the sustainable design of future cities. Fractal indicators, such as the fractal dimension, ht-index, and cumulative rate of growth (CRG) index, have been proposed as measures of such complexity. However, these fractal indicators are statistical rather than spatial, and therefore fail to characterize the spatial complexity of urban morphology, such as building footprints. To overcome this problem, a graph-based fractality index (GFI), based on a hybrid of fractal theory and deep learning techniques, is proposed in this paper. To quantify spatial complexity, several fractal variants were synthesized to train a deep graph convolutional neural network. Building footprints of London were used to test the method, and the results show that the proposed framework performs better than the traditional indices. Moreover, the possibility of bridging fractal theory and deep learning techniques on complexity issues opens up new possibilities for data-driven GIScience.
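Of the indicators mentioned, the ht-index is easy to illustrate: it counts how many times the "head" of a distribution (values above the mean) remains a minority under recursive splitting, capturing far-right-skewed, fractal-like structure (a sketch of the standard definition, not the paper's GFI):

```python
def ht_index(values, head_limit=0.4):
    """ht-index (Jiang & Yin 2013 style): split values at the mean; if
    the head (values above the mean) is a minority, recurse on it.
    The count of successful splits measures hierarchical scaling."""
    h = 1
    while len(values) > 1:
        mean = sum(values) / len(values)
        head = [v for v in values if v > mean]
        if not head or len(head) / len(values) > head_limit:
            break   # head no longer a clear minority: stop
        h += 1
        values = head
    return h

# power-law-like sizes (e.g. building-footprint areas) vs. uniform sizes
skewed = [1] * 1000 + [10] * 100 + [100] * 10 + [1000]
assert ht_index(skewed) > ht_index([5] * 50)
```

Note that the result is one number per dataset, with no notion of where the large and small elements sit, which is exactly the limitation the GFI is designed to overcome.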

How to cite: Ma, L., Seipel, S., Brandt, S. A., and Ma, D.: A graph-based fractality index to characterize complexity of urban form using deep graph convolutional neural networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8648, https://doi.org/10.5194/egusphere-egu22-8648, 2022.

EGU22-8891 | Presentations | ESSI1.2

Infilling Spatial Precipitation Recordings with a Memory-Assisted CNN 

Johannes Meuer, Laurens Bouwer, Étienne Plésiat, Roman Lehmann, Markus Hoffmann, Thomas Ludwig, Wolfgang Karl, and Christopher Kadow

Missing climate data is a widespread problem in climate science and leads to uncertainty in prediction models that rely on these data resources. So far, existing approaches for infilling missing precipitation data have mostly been numerical or statistical techniques that require considerable computational resources and are not suitable for large regions of missing data. Recently, there have been several approaches to infill missing climate data with machine learning methods such as convolutional neural networks or generative adversarial networks. These have proven to perform well on infilling missing temperature or satellite data. However, such techniques consider only the spatial variability in the data, whereas precipitation is much more variable in both space and time, and rainfall extremes with high amplitudes play an important role. We propose a convolutional inpainting network extended with a memory module: one approach captures the temporal variability in the missing-data regions using a long short-term memory (LSTM), and an attention-based module has been added to incorporate further atmospheric variables provided by reanalysis data. The model was trained and evaluated on the RADOLAN data set, which is based on radar precipitation recordings and weather station measurements. With this method we are able to complete gaps in this high-quality, highly resolved spatial precipitation data set over Germany. In conclusion, we compare our approach to statistical techniques for infilling precipitation data as well as to other state-of-the-art machine learning techniques. This combined technology of computer-science and atmospheric-research components will be presented as a dedicated climate service component and data set.

How to cite: Meuer, J., Bouwer, L., Plésiat, É., Lehmann, R., Hoffmann, M., Ludwig, T., Karl, W., and Kadow, C.: Infilling Spatial Precipitation Recordings with a Memory-Assisted CNN, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8891, https://doi.org/10.5194/egusphere-egu22-8891, 2022.

EGU22-10799 | Presentations | ESSI1.2

Design Considerations for the 3rd Spatial Dimension of the Spatiotemporal Adaptive Resolution Encoding (STARE)

Michael Rilee and Kwo-Sen Kuo

The real world does not live on a regular grid. The observations with the best spatiotemporal resolution are generally irregularly distributed over space and time, even though as data they are generally stored in arrays in files. Storing the diverse data types of Earth science, including grid, swath, and point-based spatiotemporal distributions, in separate files leads to computer-native array layouts on disk or in working memory having little or no connection with the spatiotemporal layout of the observations themselves. For integrative analysis, data must be co-aligned both spatiotemporally and in computer memory, a process called data harmonization. For data harmonization to be scalable in both diversity and volume, data movement must be minimized. The SpatioTemporal Adaptive Resolution Encoding (STARE) is a hierarchical, recursively subdivided indexing scheme for harmonizing diverse data at scale.

STARE indices are integers embedded with spatiotemporal attributes key to efficient spatiotemporal analysis. As a more computationally efficient alternative to conventional floating-point spatiotemporal references, STARE indices apply uniformly to all spatiotemporal data regardless of their geometric layouts. Through this unified reference, STARE harmonizes diverse data in their native states to enable integrative analysis without requiring homogenization of the data by interpolating them to a common grid first.
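The idea of packing both location and resolution into one integer can be sketched with a toy quadtree index on longitude–latitude. The real STARE encoding recursively subdivides spherical triangles, so the functions below are an illustrative assumption, not the actual scheme:

```python
def hierarchical_index(lon, lat, level):
    """Toy quadtree index: interleave lon/lat bits, append the level.

    Illustrates embedding location *and* resolution in a single integer;
    lon in [-180, 180), lat in [-90, 90), 'level' subdivision steps.
    """
    x = int((lon + 180.0) / 360.0 * (1 << level))
    y = int((lat + 90.0) / 180.0 * (1 << level))
    code = 0
    for b in range(level - 1, -1, -1):
        code = (code << 2) | (((x >> b) & 1) << 1) | ((y >> b) & 1)
    return (code << 5) | level          # low 5 bits carry the resolution

def common_ancestor(a, b):
    """Coarsest shared cell of two indices encoded at the same level."""
    la, lb = a & 31, b & 31
    assert la == lb
    ca, cb, level = a >> 5, b >> 5, la
    while level > 0 and ca != cb:
        ca, cb, level = ca >> 2, cb >> 2, level - 1
    return (ca << 5) | level
```

Because nearby cells share bit prefixes, co-locating or joining diverse datasets reduces to integer comparisons and prefix truncation, which is what makes this style of encoding scalable.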

The current implementation of STARE supports solid angle indexing, i.e. longitude-latitude, and time. To fully support Earth science applications, STARE must be extended to index the radial dimension for full 4D spatiotemporal indexing. As STARE’s scalability rests on a universal encoding scheme mapping spatiotemporal volumes to integers, the variety of existing approaches to encoding the radial dimension in Earth science raises complex design issues for applying STARE’s principles. For example, the radial dimension can be usefully expressed via length (altitude) or pressure coordinates. Both length and pressure raise the question of which reference surface should be used. As STARE’s goal is to harmonize different kinds of data, we must determine whether it is better to have separate radial encodings for length and pressure, or a single radial encoding accompanied by tools for translating between the various (radial) coordinate systems. The questions become more complex when we consider the wide range of Earth science data and applications, including, for example, model simulation output, lidar point clouds, spacecraft swath data, aircraft in-situ measurements, vertical or oblique parameter retrievals, and earthquake-induced movement detection.

In this work, we will review STARE’s unifying principle and the unique nature of the radial dimension. We will discuss the challenges of enabling scalable Earth science data harmonization in both diversity and volume, particularly in the context of detection, cataloging, and statistical study of fully 4D hierarchical phenomena events such as extratropical cyclones. With the twin challenges of exascale computing and increasing model simulation resolutions opening new views into physical processes, scalable methods for bringing best-resolution observations and simulations together, like STARE, are becoming increasingly important.

How to cite: Rilee, M. and Kuo, K.-S.: Design Considerations for the 3rd Spatial Dimension of the Spatiotemporal Adaptive Resolution Encoding (STARE), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10799, https://doi.org/10.5194/egusphere-egu22-10799, 2022.

EGU22-10823 | Presentations | ESSI1.2

Scalable Feature Extraction and Tracking (SCAFET): A general framework for feature extraction from large climate datasets 

Arjun Nellikkattil, June-Yi Lee, and Axel Timmermann

The study describes a generalized framework to extract and track features from large climate datasets. Unlike other feature extraction algorithms, Scalable Feature Extraction and Tracking (SCAFET) is independent of any physical thresholds, making it more suitable for comparing features from different datasets. Features of interest are extracted by segmenting the data on the basis of a scale-independent bounded variable called the shape index (Si). Si gives a quantitative measurement of the local shape of the field with respect to its surroundings. To illustrate the capabilities of the method, we have employed it in the extraction of different types of features. Cyclones and atmospheric rivers are extracted from the ERA5 reanalysis dataset to show how the algorithm extracts points as well as surfaces from climate datasets. Extraction of sea surface temperature fronts shows how SCAFET handles unstructured grids. Lastly, the 3D structures of jet streams are extracted to demonstrate that the algorithm can extract 3D features as well. The detection algorithm is implemented as a Jupyter notebook (https://colab.research.google.com/drive/1D0rWNQZrIfLEmeUYshzqyqiR7QNS0Hm-?usp=sharing) so that anyone can test it.
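A shape index of this kind can be computed from the principal curvatures of a field. The following NumPy sketch assumes the common Koenderink-style definition evaluated from Hessian eigenvalues on a regular grid; the actual SCAFET implementation may differ in convention and grid handling:

```python
import numpy as np

def shape_index(field, eps=1e-12):
    """Shape index Si in [-1, 1] from the local Hessian of a 2-D field.

    Si = (2/pi) * arctan((k1 + k2) / (k1 - k2)), where k1 >= k2 are
    taken here as the eigenvalues of the Hessian at each grid point.
    """
    fy, fx = np.gradient(field)          # derivatives along axis 0, axis 1
    fyy, fyx = np.gradient(fy)
    fxy, fxx = np.gradient(fx)
    # eigenvalues of the symmetric 2x2 Hessian [[fxx, fxy], [fxy, fyy]]
    tr = fxx + fyy
    det = fxx * fyy - fxy * fxy
    disc = np.sqrt(np.maximum((tr / 2.0) ** 2 - det, 0.0))
    k1, k2 = tr / 2.0 + disc, tr / 2.0 - disc
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2 + eps)
```

Under this convention a bowl-shaped minimum maps to +1, a dome-shaped maximum to -1, and a saddle to 0, independent of the field's amplitude, which is what makes segmentation on Si threshold-free.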

How to cite: Nellikkattil, A., Lee, J.-Y., and Timmermann, A.: Scalable Feature Extraction and Tracking (SCAFET): A general framework for feature extraction from large climate datasets, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10823, https://doi.org/10.5194/egusphere-egu22-10823, 2022.

With the far-reaching impact of Artificial Intelligence (AI) becoming more widely acknowledged across various dimensions and industries, the Geomatics scientific community has reasonably turned to automated (in some cases, autonomous) solutions while looking to efficiently extract and communicate patterns in high-dimensional geographic data. This, in turn, has led to a range of AI platforms providing grounds for cutting-edge technologies such as data mining, image processing and predictive/prescriptive modelling. Meanwhile, coastal management bodies around the world are striving to harness the power of AI and Machine Learning (ML) applications to act upon the wealth of coastal information emanating from disparate data sources (e.g., geodesy, hydrography, bathymetry, mapping, remote sensing, and photogrammetry). The cross-disciplinarity of stakeholder engagement calls for thorough risk assessment and coastal defence strategies (e.g., erosion/flooding control), consistent with the emerging need for participatory and integrated policy analyses. This paper addresses the issue of seeking techno-centric solutions in human-understandable language, for holistic knowledge engineering (from acquisition to dissemination) in a spatiotemporal context; namely, the benefits of setting up a unified Visual Analytics (VA) system, which allows for real-time monitoring and on-demand Online Analytical Processing (OLAP) operations via role-based access. Working from an all-encompassing data model could yield seamlessly collaborative workspaces that support multiple programming languages (packaging ML libraries designed to interoperate) and enable heterogeneous user communities to visualize Big Data at different granularities, as well as perform task-specific queries with little or no programming skill.
The proposed solution is an integrated coastal management dashboard, built natively for the cloud (i.e., leveraging batch and stream processing), to dynamically host live Key Performance Indicators (KPIs) whilst ensuring wide adoption and sustainable operation. The results reflect the value of effectively collecting and consolidating coastal (meta-)data into open repositories, to jointly produce actionable insight in an efficient manner.

How to cite: Anthis, Z.: Reading Between the (Shore)Lines: Real-Time Analytical Processing to Monitor Coastal Erosion, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13102, https://doi.org/10.5194/egusphere-egu22-13102, 2022.

EGU22-3316 | Presentations | GD7.5

The geoid gravity potential inversion to dense anomalies and their comparison with the seismic tomography models

Greku R.Kh. (Institute of Geological Sciences, Ukraine) and Greku D.R. (SATMAR Laboratory, DDS Capital Investments, Australia)

The gravitational tomography method is based on algorithms that invert the values of the gravitational potential (geoid) to calculate the Earth's density anomalies over the entire range of depths down to 5300 km [H. Moritz, The Figure of the Earth's Interior, Wichmann, Karlsruhe, 1990]. The initial data are the geoid height anomalies of the EGM2008 model, expanded in spherical harmonics up to degree and order n, m = 2190. The spatial resolution of the data at the surface is 10 km. The depths of the disturbing masses are determined taking the harmonic degree into account. The result is a set of maps of the density distribution at specified depths, vertical sections and 3D models.

Examples of the distribution of density anomalies are given for regions of Ukraine, Europe and Antarctica. Discrepancies with known seismic tomography studies are mainly due to the different physical properties probed in the studied medium: density versus the acoustic properties of rocks.

Density anomaly results are reported as the percent deviation from the Earth's PREM density model for a given location and depth. The entire range of density anomalies, expressed as deviations from the PREM model, does not exceed 12%. Complete agreement of the results is observed, for example, at great depths of 2800 km throughout the Earth. A section through the continent of Antarctica, with its complex relief and structure, down to a depth of 400 km also shows similar images from seismic and gravity tomography. The gravity tomography model of the tectonically active Vrancea region confirms the delamination nature of the formation of the disturbing mass and the occurrence of earthquakes in Europe.
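The percent-deviation bookkeeping can be sketched as follows, using a few rounded mantle density values in place of the finely tabulated PREM model. The numbers below are illustrative approximations (PREM also contains discontinuities this coarse table ignores), not the authors' reference table:

```python
import numpy as np

# Rounded, illustrative PREM-like densities (g/cm^3) at a few mantle
# depths (km); the real model is tabulated much more finely.
PREM_DEPTH = np.array([0.0, 24.4, 400.0, 670.0, 2891.0])
PREM_RHO   = np.array([2.60, 3.38, 3.72, 4.38, 5.57])

def prem_deviation(rho, depth_km):
    """Percent deviation of a density estimate from interpolated PREM."""
    rho_ref = np.interp(depth_km, PREM_DEPTH, PREM_RHO)
    return 100.0 * (rho - rho_ref) / rho_ref
```

A computed density of 5.0 g/cm³ near the base of the mantle, for instance, would register as roughly a -10% anomaly, within the 12% envelope reported above.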

The call for the present topic of the GD7.5 session (Prof. Saskia Goes) rightly notes the important role of rheological variability in the mantle layers in the deformation of the Earth's crust and surface, which can cause catastrophic destruction of large-block structures. In this sense, probing the state of the inner layers through data on their structural inhomogeneities becomes increasingly urgent.

How to cite: Greku, R. and Greku, D.: The geoid gravity potential inversion to dense anomalies and their comparison with the seismic tomography models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3316, https://doi.org/10.5194/egusphere-egu22-3316, 2022.

Increasing observation frequency is a current trend in optical remote sensing. However, challenges remain on the night side, when sunlight is not available. Thanks to their powerful low-light sensing capabilities, nightlight satellite sensors have been deployed to capture nightscapes of the Earth from space, observing anthropogenic and natural activities at night. At present, nightlight remote sensing applications have mostly focused on artificial lights, particularly within cities, or on self-luminous entities such as fisheries, oil, shale gas, and offshore rigs. Little attention has been paid to the potential of nightlight remote sensing for mapping land surfaces in low-light suburban areas. Observations taken under moonlight are often discarded or corrected to reduce the lunar effects. Some researchers have discussed the possibility of moonlight as a useful illumination source for the detection of nocturnal features on Earth, but no quantitative analysis has been reported so far. This study systematically evaluates the potential of moonlight remote sensing with a whole month of mono-spectral Visible Infrared Imaging Radiometer Suite/Day-Night-Band (VIIRS/DNB) and multi-spectral Unmanned Aerial Vehicle (UAV) nighttime images. Specifically, we aim: 1) to study the potential of moonlight remote sensing for mapping land surfaces in low-light suburban areas; 2) to investigate the Earth observation capability of moonlight data under different lunar phases; 3) to produce two daily uniform nightlight datasets (moonlight included and moonlight removed) for night-scene research such as diurnal weather forecasting and circadian rhythms in plants; and 4) to discuss the requirements for the next generation of nightlight remote sensing satellite sensors.

How to cite: Liu, D. and Zhang, Q.: The Potential of Moonlight Remote Sensing: A Systematic Assessment with Multi-Source and Multi-Moon phase Nightlight Data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3380, https://doi.org/10.5194/egusphere-egu22-3380, 2022.

EGU22-4300 | Presentations | ESSI1.4

Synergetic use of Sentinel-1 and Sentinel-2 data for large-scale Land Use/Land Cover Mapping 

Melanie Brandmeier, Maximilian Hell, Eya Cherif, and Andreas Nüchter

One of the largest threats to the vast ecosystem of the Brazilian Amazon Forest is deforestation and forest degradation caused by human activity. The possibility to continuously monitor these degradation events has recently become more feasible through the use of freely available satellite remote sensing data and machine learning algorithms suited for big datasets.

A fundamental challenge of such large-scale monitoring tasks is the automatic generation of reliable and correct land use and land cover (LULC) maps. This is achieved by the development of robust deep learning models that generalize well on new data. However, these approaches require large amounts of labeled training data. We use the latest results of the MapBiomas project as the ‘ground-truth’ for developing new algorithms. In this project, Souza et al. [1] used yearly composites of USGS Landsat imagery to classify the LULC for the whole of Brazil. The latest iteration of their work became available for the years 1985–2020 as Collection 6 (https://mapbiomas.org). However, this reference data cannot be considered real ground truth, as it is itself generated by machine learning models; this calls for novel approaches suited to such weakly supervised learning problems.

As tropical regions are often covered by clouds, radar data is better suited for continuous mapping than optical imagery, due to its cloud-penetrating capabilities. In a preliminary study, we combined data from ESA’s Sentinel-1 (radar) and Sentinel-2 (multispectral) missions to develop algorithms suited to act on multi-modal and multi-temporal data to obtain accurate LULC maps. The best-performing proposed deep learning network, DeepForestM2, employed a seven-month radar time series combined with a single optical scene. This model configuration reached an overall accuracy of 75.0% on independent test data. A state-of-the-art (SotA) DeepLab model, trained on the very same data, reached an overall accuracy of 69.9%.
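The overall-accuracy figures quoted here follow the standard definition, the share of pixels whose predicted class matches the reference map, which can be sketched as (hypothetical helper names):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Counts of reference class (rows) vs predicted class (columns)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(y_true, y_pred):
    """Share of pixels whose predicted class matches the reference."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float((y_true == y_pred).mean())
```

The diagonal of the confusion matrix divided by its total gives the same number, and the per-class rows/columns additionally expose which LULC classes are confused with each other.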

Currently, we are further developing this approach of fusing multi-modal data with a temporal aspect to improve on LULC classification. Larger amounts of more recent data, both Sentinel-1 and Sentinel-2 from 2020, are included in the training experiments. Additional deep learning networks and approaches to deal with weakly supervised learning [2] are developed and tested on the data. The need for the weakly supervised methods arises from the reference data, which is both inaccurate and inexact, i.e., has a coarser spatial resolution than the training data. We aim to improve the classification results qualitatively, as well as quantitatively compared to SotA methods, especially with respect to generalizing well on new datasets. The resulting deep learning methods, together with the trained weights, will also be made accessible through a geoprocessing tool in Esri’s ArcGIS Pro for users without a coding background.

  • Carlos M. Souza et al.: “Reconstructing Three Decades of Land Use and Land Cover Changes in Brazilian Biomes with Landsat Archive and Earth Engine”. In: Remote Sensing 12.17 (2020), p. 2735. DOI: 10.3390/rs12172735.
  • Zhi-Hua Zhou: “A brief introduction to weakly supervised learning”. In: National Science Review 5.1 (2018), pp. 44–53. ISSN: 2095-5138. DOI: 10.1093/nsr/nwx106.

How to cite: Brandmeier, M., Hell, M., Cherif, E., and Nüchter, A.: Synergetic use of Sentinel-1 and Sentinel-2 data for large-scale Land Use/Land Cover Mapping, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4300, https://doi.org/10.5194/egusphere-egu22-4300, 2022.

EGU22-4678 | Presentations | ESSI1.4

Lithology Mapping with Satellite, Fieldwork-based Spectral data, and Machine Learning: the case study of Beiras Group (Central Portugal) 

João Pereira, Alcides J.S.C. Pereira, Artur Gil, and Vasco M. Mantas

The lack of cartography compounds the poor knowledge of geological resources and hampers land management in regions that could benefit greatly from this information. Remote sensing has been an invaluable means of obtaining data to perform geological mapping objectively and with high scientific accuracy. In Portugal, there is a large gap in cartographic information at the 1:50 000 scale throughout the territory, and this work intends to help fill it through a set of techniques and methodologies applied to the study of a region of the Grupo das Beiras.

Spectral databases serve as an initial tool for any methodology involving spectral analysis, namely for the development of cartography methods and quick characterization of rock samples.

To address these issues, a multispectral analysis was carried out on January and July 2015 scenes with low cloud cover and atmospheric correction (Level 2) obtained from Landsat 8 (LS8). Statistical tests such as ANOVA and Tukey's HSD were applied to both images to determine whether significant differences exist between lithologies.
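One-way ANOVA of this kind compares between-lithology to within-lithology variance of band reflectance; a minimal NumPy sketch of the F statistic (Tukey's HSD would then compare the group means pairwise):

```python
import numpy as np

def anova_f(groups):
    """One-way ANOVA F statistic for reflectance grouped by lithology.

    groups : list of 1-D arrays, one array of pixel values per lithology.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    # mean squares with k-1 and n-k degrees of freedom
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that at least one lithology differs significantly in mean reflectance for that band.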

For the hyperspectral analysis, two sampling campaigns were carried out, collecting rock samples of metasediments and granites as well as soil. The analysis was performed on fresh samples, crushed samples (2 mm - 500 μm; 500 μm - 125 μm; <125 μm) and soil samples, and demonstrated significantly different spectral behaviour among the various particle sizes and between the hyperspectral signatures of fresh and crushed samples. X-ray fluorescence (XRF) was used to obtain geochemical data on major elements to validate the spectral results obtained. Correspondences were identified between the obtained hyperspectral data, the reference databases and the literature, meaning that the spectral signatures of this research are consistent with the studied samples.

Machine learning models are an emerging tool for cartography, and LS8 reflectance data were used to build them here. In this work and context, the models proved useful and successful for image classification with the algorithms assigned to this function.

How to cite: Pereira, J., Pereira, A. J. S. C., Gil, A., and Mantas, V. M.: Lithology Mapping with Satellite, Fieldwork-based Spectral data, and Machine Learning: the case study of Beiras Group (Central Portugal), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4678, https://doi.org/10.5194/egusphere-egu22-4678, 2022.

EGU22-5333 | Presentations | ESSI1.4

Remote sensing – based analysis of the islands dynamics in the Lower Danube River 

Marina Virghileanu and Gabriela Ioana-Toroimac

River islands are important components of river morpho-dynamics and can provide essential information on fluvial processes as well as on sediment and flow regimes. At the same time, river islands play an essential role from the political, environmental and socio-cultural points of view. Thus, understanding the temporal dynamics of river islands is required for channel navigation safety, port functionality, agricultural production and biodiversity. The aim of this study is to analyse the spatial and temporal changes of the river islands during the last 40 years, based on satellite remote sensing images. The study focuses on the Lower Danube River, downstream of the Iron Gates dams, which alter the flow and sediment load; the reach also suffers from dredging for navigation. The islands of the Lower Danube River have major impacts on relations between the riparian states, interfere with port activity and EU investments (as in the case of Rast port in Romania), or are the subject of ecological restoration. Multispectral satellite data, including Landsat and Sentinel-2 images, were used to map the river islands at different moments in time, at medium spatial resolution (up to 15 m for pansharpened Landsat data and 10 m for Sentinel-2). Spectral indices such as NDVI and NDWI allowed the automatic extraction of island boundaries and land cover information. From these, two analyses were carried out: 1) the characterization of river island morphology, and 2) the quantification of spatial and temporal changes over time. The resulting data are linked with in-situ measurements of flow regime and sediment supply, as well as with flood events and human activities, in order to identify the potential drivers of change. The results demonstrate a strong correlation between river island dynamics and flood events in the Lower Danube River, as the major flood event of 2006 significantly modified the islands' size and shape.
This research can support the identification of an evolutionary model of the Danube River.
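The index-based extraction step can be sketched as follows, assuming surface-reflectance bands and a simple zero threshold on the green/NIR NDWI (McFeeters' formulation); in practice the threshold would be tuned per scene:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + eps)

def ndwi(green, nir, eps=1e-9):
    """Normalized Difference Water Index (green/NIR form)."""
    return (green - nir) / (green + nir + eps)

def island_mask(green, nir, thresh=0.0):
    """Island (emerged land) pixels: NDWI below the water threshold."""
    return ndwi(green, nir) < thresh
```

Island boundaries then come from vectorizing the connected components of the mask, and NDVI over the masked pixels supplies the land-cover information described above.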

 

This research work was conducted as part of the project PCE 164/2021 “State, Communities and Nature of the Lower Danube Islands: An Environmental History (1830-2020)”, financed by the UEFISCDI.

How to cite: Virghileanu, M. and Ioana-Toroimac, G.: Remote sensing – based analysis of the islands dynamics in the Lower Danube River, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5333, https://doi.org/10.5194/egusphere-egu22-5333, 2022.

EGU22-7726 | Presentations | ESSI1.4

Investigating the links between primary metabolites of medicinal species with leaf hyperspectral reflectance 

Ayushi Gupta, Prashant K Srivastava, and Karuna Shanker

Recent studies have shown that the turnover in tree species composition across edaphic and elevational gradients is strongly correlated with functional traits. However, our understanding of functional traits has been limited by the lack of detailed studies of foliar chemistry across habitats and by the logistical and economic challenges associated with analysing plant functional traits at large geographical scales. Advances in remote sensing and spectroscopic approaches that measure spectrally detailed light reflectance and transmittance of plant foliage provide accurate predictions of several functional chemical traits. In this study, Pyracantha crenulata (D. Don) M. Roemer was used, an evergreen thorny shrub species found on open slopes between 1,000 and 2,400 m above mean sea level. P. crenulata is used in the treatment of hepatic, cardiac, stomach, and skin disease. The spectra of P. crenulata leaf samples were recorded using an ASD spectroradiometer, and primary metabolites such as chlorophyll, anthocyanin, phenolics, and sterols were analyzed. The spectroradiometer data were preprocessed using a filter and then reduced to a few sensitive bands by applying feature selection to the hyperspectral data. The band values were directly correlated with the measured metabolite values. The analysis indicates a significant correlation between P. crenulata primary metabolites and reflectance in the visible and infrared region (VISIR). This result suggests that molecules with important functional attributes could be identified by VISIR spectroscopy, saving considerable time and expense compared to wet laboratory analysis.
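The band-reduction step can be sketched as a simple filter-style feature selection, ranking bands by the absolute Pearson correlation between reflectance and the lab-measured metabolite. This is a generic illustration with hypothetical names; the study's exact feature-selection method is not specified here:

```python
import numpy as np

def top_correlated_bands(spectra, concentration, n_bands=5):
    """Rank spectral bands by |Pearson r| with a measured metabolite.

    spectra       : (n_samples, n_bands_total) reflectance matrix
                    (each band must vary across samples)
    concentration : (n_samples,) lab-measured metabolite values
    Returns (indices of the n_bands best bands, r for every band).
    """
    X = np.asarray(spectra, dtype=float)
    y = np.asarray(concentration, dtype=float)
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    order = np.argsort(-np.abs(r))
    return order[:n_bands], r
```

The retained "sensitive" bands are then the ones regressed against the wet-lab values, as described above.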

How to cite: Gupta, A., Srivastava, P. K., and Shanker, K.: Investigating the links between primary metabolites of medicinal species with leaf hyperspectral reflectance, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7726, https://doi.org/10.5194/egusphere-egu22-7726, 2022.

EGU22-7859 | Presentations | ESSI1.4

Predictive performance of deep-learning-enhanced remote-sensing data for ecological variables of tidal flats over time 

Logambal Madhuanand, Katja Phillippart, Wiebe Nijland, Jiong Wang, Steven M. De Jong, Allert I. Bijleveld, and Elisabeth A. Addink

Tidal flat systems with a diverse benthic community (e.g., bivalves, polychaetes and crustaceans) are important in the food chain for migratory birds and fish. The geographical distribution of macrozoobenthos depends on physical factors, among which sediment characteristics are key. Although high-resolution and high-frequency mapping of benthic indices (i.e., sediment composition and benthic fauna) of these coastal systems is essential to coastal management plans, it is challenging to gather such information on tidal flats through in-situ measurements. The Synoptic Intertidal Benthic Survey (SIBES) database provides this field information annually on a 500 m grid for the Dutch Wadden Sea, but continuous coverage and seasonal dynamics are still lacking. Remote sensing may be the only feasible monitoring method to fill this gap, but it is hampered by the lack of spectral contrast and variation in this environment. In this study, we used a deep-learning model to enhance the information extraction from remote-sensing images for the prediction of environmental and ecological variables of the tidal flats of the Dutch Wadden Sea. A Variational Auto-Encoder (VAE) deep-learning model was trained with Sentinel-2 satellite images with four bands (blue, green, red and near-infrared) over three years (2018, 2019 and 2020) of the tidal flats of the Dutch Wadden Sea. The model was trained to derive important characteristics of the tidal flats as image features by reproducing the input image. These features contain representative information from the four input bands, like spatial texture and band ratios, to complement the low-contrast spectral signatures. The VAE features, the spectral bands and the field-collected samples together were used to train a random forest model to predict the sediment characteristics (median grain size and silt content) and macrozoobenthic biomass and species richness.
The prediction was done on the tidal flats of Pinkegat and Zoutkamperlaag in the Dutch Wadden Sea. The encoded features consistently increased the accuracy of the predictive model. Compared to a model trained with just the spectral bands, the use of encoded features improved the prediction (coefficient of determination, R2) by 10–15 percentage points for 2018, 2019 and 2020. Our approach improves the available techniques for mapping and monitoring the sediment and macrozoobenthic properties of tidal flat systems and thereby contributes towards their sustainable management.
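The reported 10–15 point gains refer to the coefficient of determination, computed in the usual way:

```python
import numpy as np

def r_squared(y_obs, y_pred):
    """Coefficient of determination, R^2 = 1 - SS_res / SS_tot."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = ((y_obs - y_pred) ** 2).sum()     # residual sum of squares
    ss_tot = ((y_obs - y_obs.mean()) ** 2).sum()  # total sum of squares
    return 1.0 - ss_res / ss_tot
```

R^2 = 1 means perfect prediction, while R^2 = 0 means the model does no better than predicting the mean of the field observations.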

How to cite: Madhuanand, L., Phillippart, K., Nijland, W., Wang, J., De Jong, S. M., Bijleveld, A. I., and Addink, E. A.: Predictive performance of deep-learning-enhanced remote-sensing data for ecological variables of tidal flats over time, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7859, https://doi.org/10.5194/egusphere-egu22-7859, 2022.

The definition of urbanized areas, both regionally and globally, is an important basis for monitoring and managing urban development, as well as a prerequisite for studying social policies, economics, culture and the environment.

Thanks to the development of science and technology, urban areas are expanding rapidly, and methods for extracting urbanized areas quickly and accurately have become a focus of research.

In the 1970s, with the beginning of the Defense Meteorological Satellite Program (DMSP), nighttime light images were born, providing a new method for the extraction of urbanized areas.

However, due to the limits of its spatial resolution and spectral range, urbanized-area extraction based on DMSP-OLS nightlight images has known defects.

In recent years, with the development of remote sensing technology, remote sensing data with a higher resolution emerged, providing an effective and applicable data source for urban planning monitoring.

We hypothesize that higher-resolution nightlight images extract urbanized areas more precisely than the older ones.

This work uses nightlight images (NPP-VIIRS and Luojia1-01) together with urbanized-area maps (FROM-GLC 2017) to construct a logistic regression model that evaluates and compares the accuracy of the two nightlight image sources in the extraction of urbanized areas.
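Such an evaluation can be sketched as a logistic regression of urban/non-urban labels on nightlight radiance, fitted by gradient descent. This is a generic NumPy sketch with hypothetical names, not the study's exact model specification:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Logistic regression by gradient descent: P(urban) = sigmoid(Xw + b)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)   # gradient of the log-loss
        b -= lr * (p - y).mean()
    return w, b

def predict_urban(X, w, b, thresh=0.5):
    """Binary urban mask from fitted coefficients."""
    p = 1.0 / (1.0 + np.exp(-(np.asarray(X, dtype=float) @ w + b)))
    return p >= thresh
```

Fitting the same model once per nightlight product and scoring the predictions against the FROM-GLC urban labels mirrors the accuracy comparison described above.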

The case study is the Barcelona metropolitan area, Spain (636 km², 3.3 million inhabitants).

How to cite: Zheng, Q. and Roca, J.: The extraction of urbanized areas based on the high-resolution night lights images: A case study in Barcelona, Spain , EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8019, https://doi.org/10.5194/egusphere-egu22-8019, 2022.

EGU22-9012 | Presentations | ESSI1.4 | Highlight

Mapping the World at 10 m: A Novel Deep-Learning Land Use Land Cover Product and Beyond 

Dawn Wright, Steve Brumby, Sean Breyer, Abigail Fitzgibbon, Dan Pisut, Zoe Statman-Weil, Mark Hannel, Mark Mathis, and Caitlin Kontgis

Land use / land cover (LULC) maps provide critical information to governments, land use planners, and decision-makers about the spatial layout of the environment and how it is changing.  While a variety of LULC products exist, they are often coarse in resolution, not updated regularly, or require manual editing to be useful.  In partnership, Esri, Microsoft Planetary Computer, and Impact Observatory created the world’s first publicly available 10-m LULC map by automating and sharing a deep-learning model that was run on over 450,000 Sentinel-2 scenes.  The resulting map, released freely on Esri’s Living Atlas in June 2021, displays ten classes across the globe: built area, trees, scrub/shrub, cropland, bare ground, flooded vegetation, water, grassland, permanent snow/ice, clouds.  Here, we discuss key findings from the resulting map, including a quantitative analysis of how 10-m resolution allows us to assess small, low density urban areas compared to other LULC products, including the Copernicus CGLS-LC100 100-m resolution global map.  We will also share how we support project-based, on-demand LULC mapping and will present preliminary findings from a new globally consistent 2017-2021 annual LULC dataset across the entire Sentinel-2 archive.

How to cite: Wright, D., Brumby, S., Breyer, S., Fitzgibbon, A., Pisut, D., Statman-Weil, Z., Hannel, M., Mathis, M., and Kontgis, C.: Mapping the World at 10 m: A Novel Deep-Learning Land Use Land Cover Product and Beyond, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9012, https://doi.org/10.5194/egusphere-egu22-9012, 2022.

EGU22-9265 | Presentations | ESSI1.4

Application of unsupervised machine learning techniques for lithological and soil mapping in Ossa-Morena Zone 

Marcelo Silva, Pedro Nogueira, Renato Henriques, and Mário Gonçalves

Unsupervised methods are a good entry point for satellite image classification, requiring little to no input and outputting an analysis, in the form of a thematic map, that may act as a guide for more input-intensive methods. For this work, we use K-means methods to classify satellite and drone imagery covering the Ossa-Morena Zone (OMZ), in Portugal, and assess their capacity for lithological and soil mapping. The drone is equipped with a High Precision NDVI Single Sensor and was flown over the ancient mines of Mociços, Mostardeira and Santa Eulália. The OMZ is a tectonostratigraphic domain shared between Portugal and Spain, divided into sectors, and extraordinarily rich and diverse from a lithological, stratigraphical and structural point of view. For this work, we focus on the Estremoz-Barrancos sector, comprising a Neoproterozoic to Devonian metasedimentary succession with low-grade metamorphism in greenschist facies, and on the Santa Eulália Plutonic Complex (SEPC), an elliptic late-Variscan granitic massif that crosscuts the Alter do Chão-Elvas Sector and the Blastomylonitic belt and is constituted by two granitic facies, a few small mafic bodies, and some roof pendants belonging to the Alter do Chão-Elvas Sector.

The imagery used corresponds to high-level satellite imagery products gathered between 2004 and 2006 (ASTER) and between 2017 and 2021 (Landsat 8 and Sentinel-2), and to drone imagery captured on May 6th and August 31st, 2021.

K-means was applied to a variable number of selected bands, including band ratios, and tested for different numbers of initial clusters and different distance algorithms (Minimum Distance and Spectral Angle Mapping). Afterwards, we assessed its ability to outline and classify different geological structures by comparing the results to the geological map of the OMZ.
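A minimal sketch of this step, assuming synthetic stand-ins for the ASTER bands (real bands would be read from the imagery files) and scikit-learn's KMeans in place of the actual software used:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-ins for ASTER bands 2, 4, 5, 6 and 7 (rows x cols)
b2, b4, b5, b6, b7 = (rng.random((100, 100)) + 0.1 for _ in range(5))

# Band ratios as features, e.g. 4/2 and 6/(5+7) as mentioned in the text
features = np.stack([(b4 / b2).ravel(), (b6 / (b5 + b7)).ravel()], axis=1)

# K-means with a chosen number of initial clusters (here 6, a tunable parameter)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)
thematic_map = labels.reshape(100, 100)  # spectral classes forming the thematic map
```

The resulting cluster labels would then be compared against the geological map to assign lithological meaning to each spectral class.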

The obtained thematic maps point towards poorer results when using a larger selection of bands - for instance, ASTER bands 1 to 9 (in which bands 1 to 3N were resampled to 30 m) - due to interspersion of different classes, whereas when using band ratio combinations, such as 4/2 and 6/(5+7) (ASTER), the produced map successfully classifies the major geological features present in the region, with increased sharpness between contacts for a higher number of classes.

Results show that K-means, when used under the correct conditions and parameters, has the potential for lithological and soil mapping through image classification, both for satellite and drone imagery.

Future work will focus on the integration of a pre-processing step for band selection using ML techniques, such as through Principal Component Analysis, Minimum Noise Fraction and Random Forest.

The authors acknowledge the funding provided by FCT through the Institute of Earth Sciences (ICT) with the reference UIDB/GEO/04683/2020.

How to cite: Silva, M., Nogueira, P., Henriques, R., and Gonçalves, M.: Application of unsupervised machine learning techniques for lithological and soil mapping in Ossa-Morena Zone, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9265, https://doi.org/10.5194/egusphere-egu22-9265, 2022.

EGU22-10163 | Presentations | ESSI1.4

Utilizing hyperspectral imagery for burnt area mapping in a Greek setting 

Christina Lekka, Spyridon E. Detsikas, George P. Petropoulos, Petros Katsafados, Dimitris Triantakonstantis, and Prashant K. Srivastava

Earth observation (EO) - particularly from hyperspectral imagers - is gaining increasing interest in wildfire mapping, as it offers prompt, high-accuracy and low-cost delineation of a burnt area.  A key hyperspectral orbital sensor with over 20 years of operational life is the Compact High-Resolution Imaging Spectrometer (CHRIS), onboard ESA’s PROBA platform. The sensor collects spectral data in the VNIR range (400 - 1050 nm) simultaneously at 5 viewing angles and at spatial resolutions of 17 m and 34 m, containing 19 and 63 spectral bands respectively. The present study focuses on exploring the use of CHRIS PROBA legacy data combined with machine learning (ML) algorithms to obtain a burnt area cartography. In this context, a further objective of the study has been to examine the contribution of the sensor's multi-angle capabilities to enhancing burn scar detection. A wildfire that occurred during the summer of 2007 on the island of Evvoia, in central Greece, for which imagery from the CHRIS PROBA archive shortly after the fire outbreak was available, was selected as a case study. For the accuracy assessment of the derived burnt area estimate, the error matrix statistics were calculated in ENVI. Burnt area estimates were also further validated against the operational product developed in the framework of ESA’s Global Monitoring for Environmental Security/Service Element. This study’s results evidenced the added value of satellite hyperspectral imagery combined with ML classifiers as a cost-effective and robust approach to estimating burnt area extent, and particularly of the multi-angle capability in this case. All in all, the study findings can also provide important insights towards the exploitation of hyperspectral imagery acquired from current missions (e.g. HySIS, PRISMA, CHRIS, DESIS) as well as upcoming ones (e.g. EnMAP, Shalom, HySpiri and Chime).
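The error matrix statistics mentioned above (overall accuracy and the kappa coefficient, as produced by ENVI) can be illustrated with hypothetical burnt/unburnt labels; the label vectors below are invented for the example:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

# Hypothetical per-pixel labels: 1 = burnt, 0 = unburnt
reference = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])   # validation points
classified = np.array([1, 1, 0, 1, 1, 0, 0, 0, 0, 1])  # ML classifier output

cm = confusion_matrix(reference, classified)      # the error matrix
overall_acc = np.trace(cm) / cm.sum()             # overall accuracy (0.8 here)
kappa = cohen_kappa_score(reference, classified)  # kappa coefficient (0.6 here)
```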

KEYWORDS: CHRIS-PROBA, hyperspectral, machine learning, burnt area mapping

How to cite: Lekka, C., Detsikas, S. E., Petropoulos, G. P., Katsafados, P., Triantakonstantis, D., and Srivastava, P. K.: Utilizing hyperspectral imagery for burnt area mapping in a Greek setting, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10163, https://doi.org/10.5194/egusphere-egu22-10163, 2022.

EGU22-11946 | Presentations | ESSI1.4

A GEOBIA-based approach for mapping Urban Green Spaces using PlanetScope imagery: the case of Athens 

Evangelos Dosiadis, Dimitris Triantakonstantis, Ana-Maria Popa, Spyridon E. Detsikas, Ionut Sandric, George P. Petropoulos, Diana Onose, and Christos Chalkias

The technological developments in geoinformatics in recent decades have allowed the inclusion of geospatial data and analysis techniques in a wide range of scientific disciplines. One such field is the study of urban green spaces (UGS). These are defined as open, undeveloped areas that provide residents with recreational space, improving the aesthetic and environmental quality of the neighboring areas. Accurately mapping their spatial extent is an essential requirement in urban planning, and their preservation and expansion in metropolitan areas are of high importance for protecting the environment and public health.

 
The objective of this study is to explore the use of high-spatial-resolution satellite imagery from PlanetScope combined with the Geographic Object-Based Image Analysis (GEOBIA) classification approach for mapping UGS in Athens, Greece. For the UGS retrieval, an object-based (GEOBIA) classification method was developed utilizing multispectral PlanetScope imagery acquired in June 2020. Accuracy assessment was performed with a confusion matrix utilizing a set of randomly selected control points within the image, obtained from field visits and image photo-interpretation. In addition, the obtained UGS were compared against independent estimates of Green Urban Areas from the Urban Atlas operational product. All geospatial data analysis was conducted in a GIS environment (ArcGIS Pro).


Results demonstrated the usefulness of the GEOBIA technique combined with very high-spatial-resolution satellite imagery from PlanetScope for mapping UGS, as shown by the high accuracy obtained in the statistical comparisons. With the technological evolution in Earth observation data acquisition and image processing techniques, mapping UGS has been optimized and facilitated, and this study contributes in this direction.

KEYWORDS: Urban Green Spaces, Athens, PlanetScope, Earth Observation, GEOBIA

How to cite: Dosiadis, E., Triantakonstantis, D., Popa, A.-M., Detsikas, S. E., Sandric, I., Petropoulos, G. P., Onose, D., and Chalkias, C.: A GEOBIA-based approach for mapping Urban Green Spaces using PlanetScope imagery: the case of Athens, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11946, https://doi.org/10.5194/egusphere-egu22-11946, 2022.

EGU22-12092 | Presentations | ESSI1.4

Assessment of 10m Spectral and Broadband Surface Albedo Products from Sentinel-2 and MODIS data 

Jan-Peter Muller, Rui Song, Alistair Francis, Nadine Gobron, Jian Peng, and Nathan Torbick

In Song et al. (2021) [1] a framework for the retrieval of 10 m and 20 m spectral and 20 m broadband surface albedo products was described. This framework consists of four modules: 1) a machine-learning-based cloud detection method, the Spectral ENcoder for SEnsor Independence (SEnSeI) [2]; 2) an advanced atmospheric correction model, the Sensor Invariant Atmospheric Correction (SIAC) [3]; 3) an endmember-based class extraction method, which enables the retrieval of 10 m/20 m albedos based on a regression between the MODIS Bidirectional Reflectance Distribution Function (BRDF) derived surface albedo and Sentinel-2 surface reflectance resampled to MODIS resolution; and 4) a novel method of using the MODIS BRDF prior developed within the QA4ECV programme (http://www.qa4ecv.eu/) to fill in the gaps in a time series caused by cloud obscuration. We describe how ~1100 scenes were processed over 22 Sentinel-2 tiles at the STFC JASMIN facility. These tiles spanned different 4-month time periods for different users, with a maximum of 22 dates per tile, and cover Italy, Germany, South Africa, South Sudan, Ukraine and the UK for 6 different users. For the Italian site, a detailed analysis was performed of the impact of this hr-albedo on the fAPAR and LAI derived using TIP [5], whilst a second user employed a method described in [6] to compare MODIS and Sentinel-2, and a third user looked at the impact on agricultural yield forecasting. Lessons learnt from these different applications will be described, including both the opportunities and the areas where further work is required to improve data quality.
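Module 3, the regression between coarse-scale MODIS albedo and aggregated Sentinel-2 reflectance, can be sketched as follows; the single band, block size, and regression coefficients are invented stand-ins, not the published retrieval:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical Sentinel-2 surface reflectance at 20 m (one band for brevity)
s2_refl = rng.random((100, 100))

# Aggregate to a coarse MODIS-like grid by block averaging (10x10 pixel blocks)
block = 10
coarse = s2_refl.reshape(10, block, 10, block).mean(axis=(1, 3))

# Hypothetical MODIS BRDF-derived albedo on the same coarse grid
modis_albedo = 0.6 * coarse + 0.05 + rng.normal(0, 0.002, coarse.shape)

# Fit albedo = a * reflectance + b at the coarse scale ...
a, b = np.polyfit(coarse.ravel(), modis_albedo.ravel(), 1)

# ... then apply at full Sentinel-2 resolution to obtain a 20 m albedo estimate
albedo_20m = a * s2_refl + b
```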

 

We thank ESA for their support through ESA-HR-AlbedoMap: Contract CO 4000130413 and the STFC JASMIN facility and in particular Victoria Bennett for their assistance.

[1] Song, R., Muller, J.-P., Francis, A., A Method of Retrieving 10-m Spectral Surface Albedo Products from Sentinel-2 and MODIS data," 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, 2021, pp. 2381-2384, doi: 10.1109/IGARSS47720.2021.9554356

[2] Francis, A., Mrziglod, J., Sidiropoulos, P.  and J.-P. Muller, "SEnSeI: A Deep Learning Module for Creating Sensor Independent Cloud Masks," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2021.3128280.

[3] Feng et al. (2019) A Sensor Invariant Atmospheric Correction: Sentinel-2/MSI AND Landsat 8/OLI https://doi.org/10.31223/osf.io/ps957.

[4] Song, R.; Muller, J.-P.; Kharbouche, S.; Yin, F.; Woodgate, W.; Kitchen, M.; Roland, M.; Arriga, N.; Meyer, W.; Koerber, G.; Bonal, D.; Burban, B.; Knohl, A.; Siebicke, L.; Buysse, P.; Loubet, B.; Leonardo, M.; Lerebourg, C.; Gobron, N. Validation of Space-Based Albedo Products from Upscaled Tower-Based Measurements Over Heterogeneous and Homogeneous Landscapes. Remote Sensing 2020, 12, 1–23.doi: 10.3390/rs12050833

[5] Gobron, N.; Marioni, M.; Muller, J.-P.; Song, R.; Francis, A. M.; Feng, Y.; Lewis, P. ESA Sentinel-2 Albedo Case Study: FAPAR and LAI downstream products.; 2021; pp. 1–30. JRC TR (in press)

[6] Peng, J.; Kharbouche, S.; Muller, J.-P.; Danne, O.; Blessing, S.; Giering, R.; Gobron, N.; Ludwig, R.; Mueller, B.; Leng, G.; Lees, T.; Dadson, S. Influences of leaf area index and albedo on estimating energy fluxes with HOLAPS framework. J Hydrol 2020, 580, 124245.

How to cite: Muller, J.-P., Song, R., Francis, A., Gobron, N., Peng, J., and Torbick, N.: Assessment of 10m Spectral and Broadband Surface Albedo Products from Sentinel-2 and MODIS data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12092, https://doi.org/10.5194/egusphere-egu22-12092, 2022.

EGU22-12524 | Presentations | ESSI1.4

Error-reducing Structure-from-Motion derived Digital Elevation Models in data-scarce environments 

Dirk Bakker, Phuoc Phùng, Marc van den Homberg, Sander Veraverbeke, and Anaïs Couasnon

High-accuracy Digital Elevation Models (DEMs) improve the quality of flood risk assessments and many other environmental applications, yet these products are often unavailable in developing countries due to high survey costs. Structure-from-Motion (SfM) photogrammetry combined with Unmanned Aerial Vehicles (UAVs) has proven to be an effective and low-cost technique that enables a wide audience to construct local-scale DEMs. However, deviation from strict survey designs and guidelines regarding the number and distribution of Ground Control Points (GCPs) can result in linear and doming errors. Two surveys that suffer from these errors were supplied for error reduction, but neither area had an available high-accuracy DEM, nor could an additional differential Global Navigation Satellite System (dGNSS) ground survey be afforded to extract control points for a relative georeferencing approach. Little attention has been given to error reduction using global open-access elevation data, such as the TerraSAR-X add-on for Digital Elevation Measurements (TanDEM-X) 90; the Ice, Cloud and land Elevation Satellite-2 (ICESat-2); and Hydroweb.

The aim of this study was to improve and validate the two DEMs using control points extracted from the above data, and to analyze the validation results to determine the impact on error reduction using regression analyses between the vertical error and the distance from the nearest control point. The outcomes show that ICESat-2 and Hydroweb can support surveys in the absence of dGNSS GCPs with similar impact, but cannot replace the necessity of dGNSS measurements for georeferencing and validation. These findings suggest that survey guidelines can be maintained with global open-access elevation data, but the effectiveness depends on the number, distribution and estimated accuracy of that data. Doming errors can be prevented by correct camera lens calibration, which depends on stable lens conditions or a stratified distribution of high-accuracy reference data. Validation of the SfM DEM in data-scarce areas proves difficult due to the lack of an independent validation dataset, but the Copernicus GLO-30 can quantify the error and show its spatial variability. This study highlights the increasing accuracy of global open-access elevation data and shows that these databases allow the user to easily acquire more independent data for georeferencing and validation, although the RMSE cannot be accurately reduced to the sub-meter level.
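The regression analysis between vertical error and distance from the nearest control point can be sketched with synthetic check points; the error model and noise level below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical check points: distance to nearest GCP (m) and vertical DEM error (m)
distance = rng.uniform(0, 500, 200)
error = 0.002 * distance + rng.normal(0, 0.05, 200)  # linear error growth + noise

# Regression of vertical error against distance from the nearest control point:
# a positive slope indicates errors growing away from the control points
slope, intercept = np.polyfit(distance, error, 1)

# RMSE of the vertical error, the headline DEM accuracy metric
rmse = np.sqrt(np.mean(error ** 2))
```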

How to cite: Bakker, D., Phùng, P., van den Homberg, M., Veraverbeke, S., and Couasnon, A.: Error-reducing Structure-from-Motion derived Digital Elevation Models in data-scarce environments, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12524, https://doi.org/10.5194/egusphere-egu22-12524, 2022.

EGU22-13002 | Presentations | ESSI1.4

ORBiDANSe: Orbital Big Datacube Analytics Service 

Peter Baumann and Dimitar Misev

Datacubes form an accepted cornerstone for analysis (and visualization) ready spatio-temporal data offerings. The increase in user friendliness is achieved by abstracting away from the zillions of files in provider-specific organization. Datacube query languages additionally establish actionable datacubes, enabling users to ask "any query, any time" with zero coding.

However, datacube deployments typically aim at large-scale data center environments that accommodate Big Data and massively parallel processing capabilities to achieve decent performance. In this contribution, we conversely report on a downscaling experiment. In the ORBiDANSe project a datacube engine, rasdaman, has been ported to a cubesat, ESA OPS-SAT, and is operational in space. Effectively, the satellite thereby becomes a datacube service offering the standards-based query capabilities of the OGC Web Coverage Processing Service (WCPS) geo datacube analytics language.
We believe this will pave the way for on-board ad-hoc processing and filtering of Big EO Data, thereby unleashing them to a larger audience and in substantially shorter time.
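As an illustration of the "any query, any time, zero coding" idea, a WCPS request might be assembled as below; the coverage name S2_cube, the band names, and the endpoint URL are hypothetical, not the identifiers of the OPS-SAT deployment:

```python
from urllib.parse import urlencode

# A WCPS query computing mean NDVI over a hypothetical datacube named "S2_cube"
wcps_query = """
for $c in (S2_cube)
return encode(
  avg(($c.nir - $c.red) / ($c.nir + $c.red)),
  "text/csv")
""".strip()

# The query would be sent to a rasdaman WCPS endpoint (hypothetical URL) via HTTP GET
params = urlencode({"query": wcps_query})
request_url = "https://example.org/rasdaman/ows?service=WCS&version=2.0.1&" + params
```

The server evaluates the whole expression next to the data and returns only the aggregated result, which is what makes on-board filtering attractive for downlink-constrained cubesats.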

In our talk, we report about the concept, technology, and experimental results of ad-hoc on-board datacube query processing.

 

How to cite: Baumann, P. and Misev, D.: ORBiDANSe: Orbital Big Datacube Analytics Service, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13002, https://doi.org/10.5194/egusphere-egu22-13002, 2022.

EGU22-592 | Presentations | NH6.1

Producing a High-Resolution Land Cover Map for Southwest Ethiopia Using Sentinel-2 Images and Google Earth Engine 

Farzad Vahidi Mayamey, Navid Ghajarnia, Saeid Aminjafari, Zahra Kalantari, and Kristoffer Hylander

Accurate knowledge of local land cover and land use and their changes is crucial for many different applications, such as natural resources management, environmental studies, ecological and biodiversity change evaluations, and food security. Global land cover maps can be useful as reference sources and starting points; however, they usually show areas of geographical disagreement when compared to one another. Moreover, global land cover products mostly generalize different land cover types, which may not fit the specific needs of different projects and user communities. For instance, different types of forests are mostly treated as one category, as they are not easy to differentiate. In this study, we used high-resolution time-series images of Sentinel-2 to produce a local land cover map for southwest Ethiopia, focusing on 8 major land cover classes: forests, plantations of exotic trees, woodlands, home gardens, annual crop fields, grazing wetlands, urban areas, and open water bodies. We also utilized high-resolution Google Maps satellite imagery and local expert knowledge of the study area to produce an observational dataset for the training and validation steps. Different machine learning algorithms, land cover combinations, and seasonal scenarios were used to produce the best local land cover map for the study area. For this purpose, a two-step approach was implemented to produce the final high-resolution land cover map. First, we produced the best individual map for each land cover class based on the highest producer accuracy among the different scenarios. Then, to produce the final land cover map for all land cover classes, all individual maps were combined using the consumer accuracy index: for each pixel, we selected the most accurate land cover class based on the highest consumer accuracy across all individually produced maps from the first step.
In the end, we evaluated the results with the validation dataset using different confusion indices. The final high-resolution land cover map produced in this study showed that combining remote sensing and local field-based knowledge in cloud computing platforms like Google Earth Engine (GEE) improves the mapping of different land cover classes across southwest Ethiopia.
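The map-combination step described above can be sketched as follows, with made-up per-class binary maps and consumer accuracies standing in for the real GEE products:

```python
import numpy as np

# Hypothetical per-class "best" binary maps (1 where the class is predicted)
class_maps = {
    "forest":   np.array([[1, 1], [0, 0]]),
    "cropland": np.array([[0, 1], [1, 0]]),
    "water":    np.array([[0, 0], [1, 1]]),
}
# Hypothetical consumer (user's) accuracies from the validation step
consumer_acc = {"forest": 0.90, "cropland": 0.80, "water": 0.95}

names = list(class_maps)
# Score each pixel by consumer accuracy wherever the class map predicts the class
scores = np.stack([class_maps[n] * consumer_acc[n] for n in names])

# For each pixel, keep the candidate class with the highest consumer accuracy
final = np.take(names, scores.argmax(axis=0))
```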

 

Keywords: Land cover map; Sentinel-2; High resolution; Machine Learning; Google Earth Engine; Ethiopia

How to cite: Vahidi Mayamey, F., Ghajarnia, N., Aminjafari, S., Kalantari, Z., and Hylander, K.: Producing a High-Resolution Land Cover Map for Southwest Ethiopia Using Sentinel-2 Images and Google Earth Engine, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-592, https://doi.org/10.5194/egusphere-egu22-592, 2022.

EGU22-1004 | Presentations | NH6.1 | Highlight

Remote sensing big data characterization of tectonic and hydrological sources of ground deformation in California 

Xie Hu, Roland Bürgmann, and Xiaohua Xu

Although scientific advances have been achieved in every individual geoscience discipline, enabled by more extensive and accurate observations and more robust models, our knowledge of the Earth’s complexity remains limited. California represents an ideal natural laboratory that hosts active tectonic processes associated with the San Andreas fault system and hydrological processes dominated by the Central Valley, which contribute to dynamic surface deformation across the state. The spatiotemporal characteristics and three-dimensional patterns of the tectonic and hydrological sources of ground motions differ systematically. Spatially, interseismic creep is distributed along several strands of the San Andreas Fault (SAF) system. The elastic deformation off the locked faults usually spreads out over tens of kilometers in a long-wavelength pattern. Hydrologically driven displacements are distinct between water-bearing sedimentary basins and the bounding fault structures. Temporally, both displacement sources involve long-term trends, such as those from interseismic creep and prolonged climate change. In addition, episodic signals are due to seismic and aseismic fault slip events, seasonal elastic surface and groundwater loading, and poroelastic groundwater volume strain. The orientation of tectonic strain accumulation in California mainly represents a northwest trending shear zone associated with the right-lateral strike-slip SAF system. Hydrological processes mainly deform the Earth vertically while horizontal motions concentrate along the aquifer margins.

We used time-series ground displacements during 2015-2019 from four ascending tracks and five descending tracks of ESA’s Sentinel-1 Interferometric Synthetic Aperture Radar (InSAR) observations. We considered the secular horizontal surface velocities and strain rates, constrained from GNSS measurements and tectonic models, as proxies for tectonic processes. The InSAR time series and GNSS velocity maps benefit from the Southern California Earthquake Center (SCEC) Community Geodetic Model (CGM) developments. We further extracted the seasonal displacement amplitudes from the InSAR-derived time-series displacements as proxies for hydrological processes. We synergized multidisciplinary remote sensing and auxiliary big data, including ground deformation, sedimentary basins, precipitation, soil moisture, topography, and hydrocarbon production fields, using an ensemble random forest machine learning algorithm. We succeeded in predicting 86%-95% of the representative data sets.
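The extraction of seasonal displacement amplitudes from an InSAR time series can be illustrated by least-squares fitting of a trend plus an annual sinusoid; the rate, amplitude, and noise below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic InSAR time series: ~5 years of 12-day acquisitions (time in years)
t = np.arange(0, 5, 12 / 365.25)
# Displacement (mm): secular trend + annual seasonal term + noise
disp = -3.0 * t + 4.0 * np.sin(2 * np.pi * t + 0.5) + rng.normal(0, 0.5, t.size)

# Design matrix for d(t) = v*t + c + A*sin(2*pi*t) + B*cos(2*pi*t)
G = np.column_stack([t, np.ones_like(t),
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
v, c, A, B = np.linalg.lstsq(G, disp, rcond=None)[0]

secular_rate = v                     # proxy for tectonic processes (mm/yr)
seasonal_amplitude = np.hypot(A, B)  # proxy for hydrological loading (mm)
```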

Interestingly, high strain rates along the SAF system mainly occur in areas with a low-to-moderate vegetation fraction, suggesting a correlation of rough/high-relief coastal range morphology and topography with the active faulting, seasonal and orographic rainfall, and vegetation growth. Linear discontinuities in the long-term, seasonal amplitude and phase of the surface displacement fields coincide with some fault strands, the boundary zone between the sediment-fill Central Valley and bedrock-dominated Sierra Nevada, and the margins of the inelastically deforming aquifer in the Central Valley, suggesting groundwater flow interruptions, contrasting elastic properties, and heterogeneous hydrological units.

How to cite: Hu, X., Bürgmann, R., and Xu, X.: Remote sensing big data characterization of tectonic and hydrological sources of ground deformation in California, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1004, https://doi.org/10.5194/egusphere-egu22-1004, 2022.

EGU22-1082 | Presentations | NH6.1 | Highlight

Monitoring of rehabilitation of a raised bog in Ireland using a machine learning model 

Richa Marwaha and Matthew Saunders

Peatlands cover ~3% of the global land area and are under threat from land-use changes such as drainage for peat extraction and conversion to agriculture and commercial forestry. Historically, peatlands in Ireland have been used for industrial peat extraction and domestic turf cutting. One such example is Cavemount bog, County Offaly, Ireland, a former raised bog where peat extraction started in the 1970s and ceased in 2015. After 2015, a programme of rehabilitation commenced by rewetting the site to raise water levels and to promote the establishment of wetland habitats. Some of the key species associated with the vegetation communities that have been developing across the site include Betula pubescens, Calluna vulgaris, Eriophorum angustifolium, Typha latifolia and Phragmites australis.

To monitor the progress of the colonisation of natural vegetation as part of the rehabilitation plan, reliable habitat maps are required. Google Earth Engine (GEE) is a cloud computing platform where satellite images can be processed to obtain cloud-free composite images. GEE was used to develop an automated approach to map the habitats at Cavemount using multispectral satellite imagery (Sentinel-2) and a machine-learning model, i.e. a random forest classifier. In this study, 9 habitat classes were used: bare peat, coniferous trees, heather, heather and scrub, open water, pioneer open cutaway habitats, scrub pioneer open cutaway habitats, wetland, and a mosaic of wetland and scrub. Cloud-free composites for the growing season (May to September), built from satellite imagery from 2018-2021, were used to derive spectral indices such as NDVI (normalised difference vegetation index), NDWI (normalised difference water index), mNDWI (modified normalised difference water index), the red-edge vegetation index, EVI (enhanced vegetation index) and BSI (bare soil index). To extract open water, a seasonal composite of mNDWI was used, which could differentiate water from bare peat. The seasonal composite of mNDWI was also used to monitor flooding over winter periods due to increased rainfall and was compared with summer conditions. These indices, along with 10 spectral bands (10-20 m resolution), were used as input to a random forest model, and a yearly habitat map from 2018 to 2021 was developed. The overall accuracy for the testing data from 2018, 2019, 2020 and 2021 was 87.42%, 86.81%, 87.16% and 87.50%, and the kappa coefficient was 0.81, 0.80, 0.81 and 0.81, respectively. Over time, the former peat extraction area showed a transformation from bare peat to a mosaic of wetland vegetation. This methodology will provide a useful tool for the long-term monitoring of the habitats at this site and for evaluating the effect of rehabilitation on the ecological composition of the site.
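A condensed sketch of the index-plus-random-forest step, using synthetic band values and toy two-class labels in place of the Sentinel-2 composites and nine habitat classes of the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 500
# Hypothetical per-pixel band values (green, red, NIR, SWIR) for labelled samples
green, red, nir, swir = (rng.random(n) + 0.05 for _ in range(4))

# Spectral indices used alongside the raw bands as model inputs
ndvi = (nir - red) / (nir + red)
mndwi = (green - swir) / (green + swir)  # separates open water from bare peat
X = np.column_stack([green, red, nir, swir, ndvi, mndwi])
y = (ndvi > 0.1).astype(int)  # toy labels for illustration only

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
acc = accuracy_score(y_te, pred)       # overall accuracy
kappa = cohen_kappa_score(y_te, pred)  # kappa coefficient
```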
The final habitat map will also be integrated with the eddy covariance data from the site to provide further insight into the carbon and greenhouse gas dynamics of each habitat in the future.   

How to cite: Marwaha, R. and Saunders, M.: Monitoring of rehabilitation of a raised bog in Ireland using a machine learning model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1082, https://doi.org/10.5194/egusphere-egu22-1082, 2022.

EGU22-2114 | Presentations | NH6.1

Multi-annual InSAR solution of vertical land motion in 2021 lethal building collapse site in Miami

Yu, X. and Hu, X.

NOAA reported that sea level has risen by 203-228 mm since 1880, with rates accelerating to 3.556 mm/year during 2006-2015. Coastal regions, home to about half of the world’s population (~3 billion), are subject to erosion from wind and waves and to subsidence from natural compaction and artificial exploitation of subsurface resources, and are at high risk of flooding from storms and of inundation from prolonged sea level rise. Vertical land motion (VLM) directly determines relative sea level rise: locally upward VLM can help alleviate the risks, while locally downward VLM may hasten the arrival of inundation. Therefore, monitoring coastal VLM is fundamental to coastal resilience and hazard mitigation.

The 12-story Champlain Towers South building in the Miami suburb of Surfside collapsed catastrophically and claimed 98 lives on June 24th, 2021. No confident conclusion has been drawn on the cause of the collapse, but it might be related to multiple processes involving ground-floor pool deck instability, concrete damage, and land subsidence.

Subsidence has been noted in populous Surfside since the 1990s. However, we still lack a detailed mapping of contemporary coastal subsidence. Here we focus on multi-source Synthetic Aperture Radar (SAR) datasets from C-band Sentinel-1 and X-band TerraSAR-X satellite imagery.

We use time-series SAR interferometry of ascending Sentinel-1 path 48 to extract the VLM from 2015 to 2021. A comparatively stable GPS station, ZMA1, obtained from the Nevada Geodetic Laboratory acts as the reference site to calibrate the InSAR results. Long-wavelength atmospheric phase screen and orbit errors are approximated by low-order polynomial fitting. Average subsidence rates derived from stacking help reduce temporally high-frequency noise, and a comparison with the GPS network solution helps verify the InSAR measurements. Beyond that, we will also rely on high-resolution X-band TerraSAR-X data (Path 36, strip_014) to elaborate VLM details in the building clusters. In addition, NOAA reported that the relative sea level increase in Florida is 2.97 mm/year from 1931 to 2020, i.e., >0.3 m in one century. The 2019 Unified Sea Level Rise Projection in Southeast Florida predicted that sea level will rise by 254 to 432 mm in Florida compared to the level in 2000. We aim to extract high-accuracy VLM to provide scientific evidence for safer urban planning and effective adaptation strategies in coastal cities, toward the ultimate goal of coastal resilience under global climate change.
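The ramp-removal and stacking steps can be sketched on synthetic velocity maps; the subsidence bowl, ramp magnitudes, and noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
ny, nx = 50, 50
x, y = np.meshgrid(np.arange(nx), np.arange(ny))

# Synthetic "true" velocity field: a Gaussian subsidence bowl (mm/yr)
true_vel = -5.0 * np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / 100)

# 20 noisy maps, each contaminated by a planar ramp (orbit/atmosphere proxy)
maps = [true_vel
        + rng.normal(0, 0.01) * x + rng.normal(0, 0.01) * y + rng.normal(0, 0.5)
        + rng.normal(0, 0.3, (ny, nx))
        for _ in range(20)]

# Remove a first-order polynomial (planar) fit from each map
G = np.column_stack([x.ravel(), y.ravel(), np.ones(nx * ny)])
corrected = [m - (G @ np.linalg.lstsq(G, m.ravel(), rcond=None)[0]).reshape(ny, nx)
             for m in maps]

# Stacking: averaging suppresses temporally high-frequency noise
mean_rate = np.mean(corrected, axis=0)
```

In practice the plane would be fitted over pixels assumed stable (e.g. far from the deforming area), so that the deformation signal itself is not absorbed into the ramp estimate.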

How to cite: Yu, X. and Hu, X.: Multi-annual InSAR solution of vertical land motion in 2021 lethal building collapse site in Miami, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2114, https://doi.org/10.5194/egusphere-egu22-2114, 2022.

EGU22-3291 | Presentations | NH6.1

Land subsidence in Liaohe River Delta, China due to oil and gas withdrawal, measured from multi-geometry InSAR data 

Wei Tang, Zhiqiang Gong, Jinbao Jiang, and Zhicai Li

Liaohe River Delta (LRD) is one of the major centers for hydrocarbon production, agriculture, and fisheries in Northeastern China. Liaohe Oilfield, located in the deltaic region, is China’s third-largest oilfield with an annual production capacity of 10 million tons of crude oil and 800 million m3 of natural gas. Since its operation in 1970, Liaohe Oilfield had produced more than 480 million tons of crude oil and 88 billion m3 of natural gas by the end of 2019.

Pore pressure drawdown due to oil/gas production has resulted in reservoir compaction and surface subsidence above the reservoir. This compaction and subsidence can cause significant damage to production and surface facilities. The main concerns relate to low-lying coastal areas in the context of eustatic sea-level rise (SLR), where land subsidence contributes to relative SLR and exacerbates flooding hazards. In addition, regional and local land subsidence has combined with global SLR to cause wetland loss in the LRD.

Our main aim in this study is to investigate time-dependent land subsidence induced by reservoir depletion in the LRD, by analyzing Synthetic Aperture Radar (SAR) images from the Sentinel-1 satellite. We retrieved vertical land subsidence and horizontal displacements by processing and merging multi-geometry images from two ascending and two descending tracks covering the area over the 2017 to 2021 time span. We observed significant local subsidence features in several active production oilfields, and the areal extent of subsidence is broadly consistent with the spatial extent of production wells. The most prominent subsidence is occurring in the Shuguang oilfield. Due to reservoir depletion, it forms an elliptical land subsidence bowl with a major axis of ~6.3 km and a minor axis of ~3.2 km, and the maximum subsidence rate exceeds 230 mm/yr. Because of the large depth D relative to the areal extent L, that is, a relatively small ratio L/D, the displacement field caused by oil production is three-dimensional. An inward, symmetrical, east-west horizontal movement was observed around the subsidence bowl in the Shuguang oilfield, with an average eastward movement rate of ~40 mm/yr and an average westward rate of ~30 mm/yr. This three-dimensional deformation is well reproduced by a cylindrical reservoir compaction/subsidence model.

In September 2021, a storm surge accompanied by heavy rainfall caused water levels to rise by 50-130 cm in Liaodong Bay, resulting in extreme flooding in oilfields along the coast. The most severe flooding occurred in the Shuguang oilfield, the area with the highest land subsidence rate. Our new InSAR-derived surface subsidence measurements associated with the oilfield operations raise the question of the potential impact of land subsidence on flood severity. This work highlights the importance of incorporating reservoir-depletion-induced subsidence into flood management to ensure the security of the oil and gas industry along coastal regions.

How to cite: Tang, W., Gong, Z., Jiang, J., and Li, Z.: Land subsidence in Liaohe River Delta, China due to oil and gas withdrawal, measured from multi-geometry InSAR data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3291, https://doi.org/10.5194/egusphere-egu22-3291, 2022.

EGU22-4618 | Presentations | NH6.1

Supervised LSTM Modelling for Classification of Sinkhole-related Anomalous InSAR Deformation Time Series 

Anurag Kulshrestha, Ling Chang, and Alfred Stein

Recently, we have shown that sinkholes can be characterized at an early stage by precursory deformation patterns from InSAR time series [1]. These patterns are often related to sudden changes in deformations or deformation velocities. With such a priori information, accurate deformation modelling and early detection of precursory patterns is feasible. It is still a challenge, however, to scale up methods for classifying larger numbers of sinkholes over large areas that may contain tens of thousands of InSAR observations. To address this, we explore the use of Long Short-Term Memory (LSTM) Networks to classify multi-temporal datasets by learning unique and distinguishable hidden patterns in the deformation time series samples.

We propose to design a two-layered bi-directional LSTM model and to train it in a supervised manner to classify sinkhole-related anomalous deformation patterns and non-anomalous deformation time series. Samples for the linear, Heaviside, and Breakpoint deformation classes are extracted by applying Multiple Hypothesis Testing (MHT) [2] to deformation time series and are used to compile the training dataset. These samples are randomly divided into a training set and a testing set, and each sample is associated with a target label using the one-hot encoding method. Hyperparameters of the model are tuned over a broad range of commonly used values. Using categorical cross-entropy as the loss function, the model is optimized with the Adam optimizer.
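The dataset preparation described above (random train/test split and one-hot target labels) can be sketched as follows. The class names follow the abstract; the function names and split fraction are illustrative assumptions, and the LSTM itself is omitted as framework-dependent:

```python
import numpy as np

CLASSES = ["linear", "Heaviside", "Breakpoint"]  # deformation classes named above

def one_hot(labels, classes=CLASSES):
    """Map string class labels to one-hot target vectors."""
    idx = np.array([classes.index(l) for l in labels])
    out = np.zeros((len(labels), len(classes)))
    out[np.arange(len(labels)), idx] = 1.0
    return out

def train_test_split(X, y, test_frac=0.3, seed=0):
    """Randomly divide samples into a training set and a testing set."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    test, train = order[:n_test], order[n_test:]
    return X[train], y[train], X[test], y[test]
```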

We tested our method on an oil extraction field in Wink, Texas, USA, where sinkholes have been continuously evolving since 1980 and a recent sinkhole occurred in mid-2015. We used 52 Sentinel-1 SAR images acquired between 2015 and 2017. The results show that the supervised LSTM model classifies linear deformation samples with an accuracy of ~98%. The accuracy for classifying the Heaviside and Breakpoint classes is at most ~75%. Temporal periodicity was observed in the occurrence of anomalies, which may be related to the frequency of oil extraction and water injection events. Heaviside anomalies were observed to be clustered in space, with a higher density close to the sinkhole location, whereas Breakpoint anomalies were much more uniformly distributed. Close to the sinkhole spot, we found two InSAR measurement points that were classified into the Breakpoint class and showed considerable changes in deformation velocity (~60° velocity-change angle) shortly before the occurrence of this sinkhole; these are likely associated with sinkhole-related precursory patterns. Through this study we conclude that our supervised LSTM is an effective classification method to identify anomalies in time. The classification map of InSAR deformation temporal behavior can be used to identify areas that are vulnerable to future sinkhole occurrence and require further investigation. In the future, we plan to further develop methods to increase the classification accuracy of the anomalous classes.

References:

[1] Anurag Kulshrestha, Ling Chang, and Alfred Stein. Sinkhole Scanner: A New Method to Detect Sinkhole-related Spatio-temporal Patterns in InSAR Deformation Time Series. Remote Sensing, 13(15), 2021.

[2] Ling Chang and Ramon F. Hanssen. A Probabilistic Approach for InSAR Time-Series Postprocessing. IEEE Transactions on Geoscience and Remote Sensing, 54(1):421–430, 2016.

How to cite: Kulshrestha, A., Chang, L., and Stein, A.: Supervised LSTM Modelling for Classification of Sinkhole-related Anomalous InSAR Deformation Time Series, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4618, https://doi.org/10.5194/egusphere-egu22-4618, 2022.

EGU22-4800 | Presentations | NH6.1

A methodology for the analysis of InSAR Time Series for the detection of ground deformation events 

Laura Pedretti, Massimiliano Bordoni, Valerio Vivaldi, Silvia Figini, Matteo Parnigoni, Alessandra Grossi, Luca Lanteri, Mauro Tararbra, Nicoletta Negro, and Claudia Meisina

The availability of the Sentinel-1 dataset, with its high temporal resolution (6-12 days) and long observation period, can support "near-real-time monitoring", since it provides a sampling frequency sufficient to track the evolution of some ground deformations (e.g. landslides, subsidence) compared to other sensors. However, the analysis and elaboration of such a huge dataset, covering large areas, can be tricky and time-consuming without a first screening to identify areas of potential interest for significant ground deformation. The interpretation of A-InSAR Time Series (TS) is advantageous for understanding the relation between ground movement processes and triggering factors (snow, heavy rainfall), both in areas where A-InSAR TS can be compared with in-situ monitoring instruments and in areas where in-situ instruments are scarce or absent. Exploiting the availability of Sentinel-1 data, this work aims to develop a new methodology ("ONtheMOVE" - InterpolatiON of SAR Time series for the dEtection of ground deforMation eVEnts) to classify the trend of TS (uncorrelated, linear, non-linear); to identify breaks in non-linear TS; and to provide descriptive parameters (beginning and end of the break, length in days, cumulative displacement, average rate of displacement) characterizing the magnitude and timing of changes in ground motion. The methodology has been tested on two Sentinel-1 datasets available from 2014 to 2020 in the Piemonte region, northwestern Italy, an area prone to slow-moving slope instabilities. The methodology can be applied to any satellite dataset with low or high temporal resolution, and it can be tested in any area to identify ground instabilities (slow-moving landslides, subsidence) at local or regional scale. The thresholds used for event detection should be calibrated according to the geological and geomorphological processes and characteristics of the specific site or region.
This innovative methodology provides a tool that supports and integrates conventional methods for planning and management of the territory, furnishing further validation of the real kinematic behaviour of ground movement processes at each test site and indicating where further investigation is necessary. In addition, the elaboration applied to Sentinel-1 data is helpful both for back analysis and for near-real-time monitoring of the territory, as regards the characterization and mapping of the kinematics of ground instabilities and the assessment of susceptibility, hazard and risk.
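The ONtheMOVE algorithm itself is not published in this abstract. The following hypothetical sketch only illustrates the two generic ingredients it describes: locating a break in a deformation time series by comparing a single linear fit with a piecewise fit, and deriving the listed descriptive parameters (beginning/end, length in days, cumulative displacement, average rate):

```python
import numpy as np

def describe_break(t, d, i_start, i_end):
    """Descriptive parameters for a detected break (t in days, d in mm).

    The output layout is a hypothetical illustration of the parameters
    listed in the abstract, not the tool's actual format.
    """
    length_days = float(t[i_end] - t[i_start])
    cum_disp = float(d[i_end] - d[i_start])
    return {
        "t_begin": float(t[i_start]),
        "t_end": float(t[i_end]),
        "length_days": length_days,
        "cum_disp_mm": cum_disp,
        "rate_mm_per_day": cum_disp / length_days,
    }

def best_single_break(t, d):
    """Pick the breakpoint that most reduces residuals vs one straight line."""
    base = np.sum((d - np.polyval(np.polyfit(t, d, 1), t)) ** 2)
    best_i, best_sse = None, base
    for i in range(2, len(t) - 2):   # keep at least 2 points on each side
        sse = 0.0
        for tt, dd in ((t[:i], d[:i]), (t[i:], d[i:])):
            sse += np.sum((dd - np.polyval(np.polyfit(tt, dd, 1), tt)) ** 2)
        if sse < best_sse:
            best_i, best_sse = i, sse
    return best_i   # None -> no break improves on the linear fit
```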

How to cite: Pedretti, L., Bordoni, M., Vivaldi, V., Figini, S., Parnigoni, M., Grossi, A., Lanteri, L., Tararbra, M., Negro, N., and Meisina, C.: A methodology for the analysis of InSAR Time Series for the detection of ground deformation events, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4800, https://doi.org/10.5194/egusphere-egu22-4800, 2022.

Accurate mapping of spatial extent changes in urban built-up areas is essential for detecting urbanization and for analyzing the drivers of urban development and the impact of urbanization on the environment. In recent years, nighttime light images have been widely used for extracting urban built-up areas, but traditional extraction methods need to be improved in terms of accuracy and automation. In this experiment, a U-Net model was built and trained with NPP-VIIRS and MOD13A1 data from 2020. We used the optimally tuned model to retrieve the spatial extent of built-up areas in China from 2012 to 2021 and analyzed their changing trend over this period. The results showed that U-Net outperformed random forest (RF) and support vector machine (SVM), with an overall accuracy (OA) of 0.9969 and an mIoU of 0.7342. The growth rate of built-up areas is higher in the south and northwest, but the largest growth areas are still concentrated in the east and southeast, which is consistent with China's economic development and urbanization process. This experiment produced a method to extract China's urban built-up areas effectively and rapidly, which provides a useful reference for studying China's urbanization.
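The two scores quoted above, overall accuracy (OA) and mean intersection-over-union (mIoU), both derive from a pixel-wise confusion matrix. A minimal sketch with illustrative arrays (not the actual NPP-VIIRS data):

```python
import numpy as np

def confusion(y_true, y_pred, n_classes=2):
    """Pixel-wise confusion matrix for a binary built-up / background map."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    """Fraction of correctly classified pixels."""
    return np.trace(cm) / cm.sum()

def mean_iou(cm):
    """mIoU: average of per-class intersection-over-union."""
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    return float(np.mean(inter / union))
```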

How to cite: Bai, M.: Detecting China's urban built-up areas expansion over the last decade based on the deep learning through NPP-VIIRS images, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6822, https://doi.org/10.5194/egusphere-egu22-6822, 2022.

EGU22-7215 | Presentations | NH6.1 | Highlight

Scalable Change Detection in Large Sentinel-2 data with SEVA 

Mike Sips and Daniel Eggert

We present SEVA, a scalable exploration tool that supports users in detecting land-use changes in large optical remote sensing data. SEVA addresses three current scientific and technological challenges of detecting changes in large data sets: a) the automated extraction of relevant changes from many high-resolution optical satellite observations, b) the exploration of the spatial and temporal dynamics of the extracted changes, and c) the interpretation of the extracted changes. To address these challenges, we developed a distributed change detection pipeline consisting of a data browser and an extraction, an error analysis, and an interactive exploration component. The data browser supports users in assessing the spatial and temporal distribution of available Sentinel-2 images for a region of interest. The extraction component extracts changes from Sentinel-2 images using the post-classification change detection (PCCD) method. The error analysis component supports users in interpreting the relevance of extracted changes with global and local error metrics. The interactive exploration component supports users in investigating the spatial and temporal dynamics of extracted changes. SEVA supports users through interactive visualization in all components of the change detection pipeline.
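Post-classification change detection compares two independently classified maps pixel by pixel. A minimal sketch of the idea (the function name and output layout are illustrative, not SEVA's actual API):

```python
import numpy as np

def pccd(labels_t1, labels_t2):
    """Post-classification change detection: compare two classified maps.

    Returns a boolean change mask and a dict of (from, to) transition counts.
    """
    change = labels_t1 != labels_t2
    transitions = {}
    for a, b in zip(labels_t1[change], labels_t2[change]):
        key = (int(a), int(b))
        transitions[key] = transitions.get(key, 0) + 1
    return change, transitions
```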

How to cite: Sips, M. and Eggert, D.: Scalable Change Detection in Large Sentinel-2 data with SEVA, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7215, https://doi.org/10.5194/egusphere-egu22-7215, 2022.

EGU22-7236 | Presentations | NH6.1

Application of remote sensing big data in landslide identification 

Yuqi Song and Xie Hu

Landslides are common natural disasters worldwide. Knowledge of the landslide distribution is fundamental for landslide monitoring, disaster mitigation and reduction. Traditional in-situ observations (e.g., leveling, GPS, extensometers, inclinometers) usually have high accuracy, but they are expensive and labor-intensive and may also involve risks in the field. Alternatively, remote sensing data can capture regional land surface features and are thus efficient for landslide mapping. Recent studies on landslide identification mainly rely on pixel-based or object-oriented classification using optical images. Nonetheless, landslide activity is governed by multiple processes involving topography, geology, land cover, catchment, precipitation, and tectonics (e.g., dynamic shaking or aseismic creeping). Remote sensing data and products are beneficial for extracting some of these critical parameters on a regional scale. The rapid development of machine learning algorithms makes it possible to systematically construct landslide inventories by interpreting multi-source remote sensing big data. Populous California suffers from high risks of landsliding. The United States Geological Survey (USGS) compiles the landslide inventory of the State and reports that California has about 86k landslides. Steep slopes in the coastal ranges, the wet climate of northern California, youthful materials at the surface from active tectonics of the San Andreas Fault and secondary fault systems, and dynamic and aseismic movements instigated by the faults all contribute to high landslide susceptibility in California. In May 2017, the steep slopes at Mud Creek on California's Big Sur coast collapsed catastrophically. During January and February 2019, several landslides occurred in the southern part of the Santa Monica Mountains. In January 2021, a large debris flow hit Rat Creek in Big Sur due to extreme precipitation.
In addition, a fairly complete collection of remote sensing data and products is available for California. Here we use machine learning methods to refine landslide mapping in California using remote sensing big data, including elevation, slope, and aspect derived from SRTM digital elevation models (DEM), the normalized difference vegetation index (NDVI) derived from Landsat 8 OLI images, hydrometeorological observations, the nearest distance to rivers and faults, geological and land cover maps, as well as Synthetic Aperture Radar (SAR) images. We will use the archived landslide inventory for model training and testing. We plan to further explore the critical variables determining landslide occurrence and the inferred triggering mechanisms.
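One of the input features listed above, NDVI, is a simple band ratio. A minimal sketch (the band choice for Landsat 8 OLI is noted in the comment; a small epsilon is added here to avoid division by zero over dark pixels):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from NIR and red reflectance.

    For Landsat 8 OLI these are bands 5 (NIR) and 4 (red).
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```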

How to cite: Song, Y. and Hu, X.: Application of remote sensing big data in landslide identification, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7236, https://doi.org/10.5194/egusphere-egu22-7236, 2022.

EGU22-7803 | Presentations | NH6.1

Detection of Volcanic Deformations in InSAR Velocity Maps - a contribution to TecVolSA project 

Teo Beker, Homa Ansari, Sina Montazeri, and Qian Song

TecVolSA (Tectonics and Volcanoes in South America) is a project whose goal is to develop intelligent Earth Observation (EO) data processing and exploitation for monitoring various geophysical processes in the central South American Andes. A large amount of Sentinel-1 data spanning about 5 years has been processed using mixed Permanent Scatterer and Distributed Scatterer (PS/DS) approaches. The resulting products are velocity maps with a relative InSAR error on the order of 1 mm/yr at large scale (>100 km). The second milestone of the project was the automatic extraction of information from the data; in this work, the focus is on detecting volcanic deformations. Since real data prepared in this manner are limited, a synthetic training set is used to train a deep learning model for the detection of volcanic deformations. Models were trained from scratch, and InceptionResNet v2 was selected for further experiments as it gave the best performance among the tested models. Explainable AI (XAI) techniques were used to understand and analyze the confidence of the model and how to improve it. The models trained on the synthetic training set underperformed on the real test set. Using the Grad-CAM technique, it was identified that slope-induced signals and salt lake deformations were mistakenly identified as volcanic deformations. These patterns are difficult to simulate and were not contained in the synthetic training set. This distribution gap was bridged using a hybrid synthetic-real fine-tuning set, consisting of real slope-induced signal data and synthetic volcanic data. Additionally, the false positive rate of the model was reduced using low-pass spatial filtering of the real test set, and finally by adjusting the temporal baseline based on a sensitivity analysis. The model successfully detected all 10 deforming volcanoes in the region, with deformation rates ranging from 0.4 to 1.8 cm/yr.
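The low-pass spatial filtering step mentioned above can be illustrated with a simple moving-average filter applied to a velocity map; this is a generic sketch, not the filter actually used in the project:

```python
import numpy as np

def box_lowpass(vel, k=3):
    """Simple k x k moving-average low-pass filter for a velocity map.

    Edge pixels are handled by padding with the edge values.
    """
    pad = k // 2
    v = np.pad(vel, pad, mode="edge")
    out = np.zeros_like(vel, dtype=float)
    for i in range(vel.shape[0]):
        for j in range(vel.shape[1]):
            out[i, j] = v[i:i + k, j:j + k].mean()
    return out
```

Suppressing isolated single-pixel outliers in this way reduces false positives from noisy pixels while preserving the broad, spatially coherent signals typical of volcanic deformation.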

How to cite: Beker, T., Ansari, H., Montazeri, S., and Song, Q.: Detection of Volcanic Deformations in InSAR Velocity Maps - a contribution to TecVolSA project, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7803, https://doi.org/10.5194/egusphere-egu22-7803, 2022.

EGU22-8948 | Presentations | NH6.1 | Highlight

Decrease of anthropogenic emission from aviation and detection of natural hazards with potential application in geosciences using satellite sensors, ground-based networks and model forecasts in the context of the SACS/ALARM early warning system 

Hugues Brenot, Nicolas Theys, Erwin de Donder, Lieven Clarisse, Pierre de Buyl, Nicolas Clerbaux, Simone Dietmüller, Sigrun Matthes, Volker Grewe, Sandy Chkeir, Alessandra Mascitell, Aikaterini Anesiadou, Riccardo Biondi, Igor Mahorčič, Tatjana Bolić, Ritthik Bhattacharya, Tim Winter, Adam Durant, Michel Van Roozendael, and Manuel Soler

Aviation safety can be jeopardised by multiple hazards arising from natural phenomena, e.g., severe weather, aerosols/gases from natural hazards, and space weather. In addition, the anthropogenic emissions and climate impact of aviation could be reduced. The use of satellite sensors, ground-based networks, and model forecasts is essential to detect and mitigate the risk of airborne hazards for aviation, as flying through them can have a strong impact on engines (abrasion and damage caused by aerosols) and on the health of passengers (e.g. due to associated hazardous trace gases).

The goal of this work is to give an overview of the alert data products under development in the ALARM SESAR H2020 Exploratory Research project. The overall objective of ALARM (multi-hAzard monitoring and earLy wARning system; https://alarm-project.eu) is to develop a prototype global multi-hazard monitoring and Early Warning System (EWS), building upon SACS (Support to Aviation Control Service; https://sacs.aeronomie.be). This work presents the creation of alert data products, which have potential uses in geosciences (e.g. meteorology, climatology, volcanology). These products include observational data, alert flagging and tailored information (e.g., height of hazard and contamination of flight level – FL). We provide information about the threat to aviation, but also notifications for geoscience applications. Three kinds of products are generated, i.e., early warnings (with geolocation, level of severity, quantification, …), nowcasting (up to 2 hours), and forecasting (from 2 to 48 hours) of hazard evolution at different FLs. Note that nowcasting and forecasting concern SO2 contamination at FLs around selected airports and the risk of environmental hotspots. This study shows the detection of four types of risks and weather-related phenomena, for which our EWS generates homogenised NetCDF Alert Products (NCAP) data. The first type is the near-real-time detection of recent volcanic plumes, smoke from wildfires, and desert dust clouds, demonstrating the interest of combining geostationary and polar-orbiting satellite observations. For the second type, the ALARM EWS uses satellite and ground-based (GB) observations and model forecasts to create NCAP related to real-time space weather activity. Exploratory research is conducted by ALARM partners to improve detection of a third type of risk, i.e., the initiation of small-scale deep convection (under 2 km) around airports. GNSS data (ground-based networks and radio occultations) and lightning and radar data are used to implement NCAP data, designed with the objective of bringing relevant information for improving nowcasts around airports. The fourth type is related to the detection of environmental hotspots, which are regions strongly sensitive to aviation emissions. ALARM partners investigate the climate impact of aviation emissions with respect to the actual synoptic atmospheric conditions, relying on algorithmic Climate Change Functions (a-CCFs). These a-CCFs describe the climate impact of individual non-CO2 forcing compounds (contrails, nitrogen oxides and water vapour) as a function of time, geographical location and cruise altitude.

Acknowledgements:

ALARM has received funding from the SESAR Joint Undertaking (JU) under grant agreement No 891467. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and the SESAR JU members other than the Union.

How to cite: Brenot, H., Theys, N., de Donder, E., Clarisse, L., de Buyl, P., Clerbaux, N., Dietmüller, S., Matthes, S., Grewe, V., Chkeir, S., Mascitell, A., Anesiadou, A., Biondi, R., Mahorčič, I., Bolić, T., Bhattacharya, R., Winter, T., Durant, A., Van Roozendael, M., and Soler, M.: Decrease of anthropogenic emission from aviation and detection of natural hazards with potential application in geosciences using satellite sensors, ground-based networks and model forecasts in the context of the SACS/ALARM early warning system, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8948, https://doi.org/10.5194/egusphere-egu22-8948, 2022.

The Arctic region is a very remote and vulnerable ecosystem, but it is also rich in natural resources, which have been exploited for many decades. These ecosystems are particularly vulnerable to any industrial accident. The Arctic has short summers, low temperatures, and limited sunlight, so it can take decades for Arctic ecosystems to recover from anthropogenic pollution. Examples of the potential hazards of exploiting natural resources in such fragile environments, and of the detrimental impact on the polar ecosystem and communities, are all too frequent. In the case of the oil and gas industry, spills caused by the failure of old pipelines are a regular occurrence. Given the geographical isolation of these activities, remote sensing is an obvious technology to underpin any effective monitoring solution. Increasing availability in the public domain, together with recent advances in resolution, suggests that satellite imagery can play a key role in effectively monitoring oil spills, and this is the focus of this study.

The remote sensing of polar regions and the detection of terrestrial oil spills have both been studied previously; however, there has been little work investigating the two in combination. The challenge is how to detect an oil spill arising from an unknown incident or illegal activity such as discharge. Oil spill detection by applying image processing techniques to Earth Observation (EO) data has historically focused on marine pollution. Satellite-based Synthetic Aperture Radar (SAR), with its day/night and all-weather capability and wide coverage, has proven to be effective there. Oil spill detection with remote sensing in terrestrial environments has received less attention due to the typically smaller regional scale of terrestrial oil spill contamination, together with the overlapping spectral signatures of the impacted vegetation and soils. SAR has not proven very effective onshore because of the false positives and consequent ambiguities associated with interpretation, reflecting the complexity of land cover.

A number of studies have highlighted the potential of airborne hyperspectral sensors for oil spill detection, either through the identification of vegetation stress or directly on bare sites, with absorption bands identified in the short-wave infrared (SWIR) range at 1730 and 2300 nm. However, unlike spaceborne sensors, these devices do not provide regular coverage over broad areas. Several hyperspectral satellites have been launched to date, but they have technical constraints. The medium spatial resolution and long revisit times of most current hyperspectral instruments limit their use for identifying smaller incidents that often occur with high unpredictability.

No single sensor currently has all the characteristics required to detect the extent, impact and recovery from onshore oil spills.  This study will look at the potential of combining medium spatial resolution imagery (Sentinel-2) for initial screening, with high spatial/temporal (WorldView-3) and high spectral (PRISMA) resolution data, both covering the key SWIR bands, for site specific analysis.

How to cite: Sadler, G. and Rees, G.: Monitoring anthropogenic pollution in the Russian sub-Arctic with high resolution satellite imagery: An oil spill case study, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10041, https://doi.org/10.5194/egusphere-egu22-10041, 2022.

EGU22-10256 | Presentations | NH6.1

Automatic Interferogram Selection for SBAS-InSAR Based on Deep Convolutional Neural Networks 

Yufang He, Guangzong Zhang, Hermann Kaufmann, and Guochang Xu

The small baseline subset of spaceborne interferometric synthetic aperture radar (SBAS-InSAR) technology has become a classical method for monitoring slow deformations through time series analysis, with an accuracy in the centimeter or even millimeter range. The process of calculating interferograms directly affects the accuracy of SBAS-InSAR measurements, whereby the selection of high-quality interferogram pairs is crucial for SBAS data processing. Especially in the era of big data, the demand for an automatic and effective method to select high-quality interferograms in SBAS-InSAR technology is growing. Although several methods exist, including the simulated annealing (SA) search strategy and graph theory (GT), the most effective approach to high-quality interferogram selection still relies on traditional manual inspection. Owing to the high degree of human interaction and the large amount of repetitive work involved, this manual method increases the instability and inconsistency of the deformation calculation.
Considering that interference pairs of different quality show different color characteristics, a deep convolutional neural network (DCNN) approach is adopted in this study. The ResNet50 model (a DCNN model) has the advantages of a standard network structure and easy programming. The idea is based on the fact that interferograms less contaminated by different noise sources display smaller color phase changes within a certain phase range. Hence, a training set containing almost 3000 interferograms with varying noise contamination, obtained from land subsidence in several subregions of Shenzhen, China, was established. Next, the ResNet50–DCNN model was set up, the respective parameters were determined through analysis of the trained data sets, and traditional interferogram selection methods were used to evaluate the performance. For the simulation experiments and the evaluation and validation of real data, phase-unwrapped interferograms obtained by the temporal–spatial baseline threshold method are classified into high- and low-quality interferograms based on the ResNet50 model. The proportion of high-quality interferograms correctly extracted by the ResNet50–DCNN method is above 90% for the simulation experiment and above 87% for the real-data experiment, which reflects the accuracy and reliability of the proposed method. A comparison of the overall surface subsidence rates and the deformation information of local PS points reveals little difference between the land subsidence rates obtained by the ResNet50–DCNN method and the actual simulations or the manual method.
The proposed method provides an automated and fast interferogram selection process for high-quality data, which contributes significantly to engineering applications of SBAS-InSAR. In future research, we will expand the training samples and study further DCNN models to improve the general accuracy and widen the applicability of this method.
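The underlying idea, that less noise-contaminated interferograms show smoother phase (color) variations, can be illustrated with a simple quality proxy based on the wrapped phase gradient. This is an illustrative metric only, not the ResNet50–DCNN classifier itself:

```python
import numpy as np

def wrapped_gradient(phase):
    """Phase differences along the last axis, wrapped into (-pi, pi]."""
    d = np.diff(phase, axis=-1)
    return np.angle(np.exp(1j * d))

def coherence_proxy(phase):
    """Mean absolute wrapped phase gradient: lower = smoother interferogram."""
    return float(np.mean(np.abs(wrapped_gradient(phase))))
```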

How to cite: He, Y., Zhang, G., Kaufmann, H., and Xu, G.: Automatic Interferogram Selection for SBAS-InSAR Based on Deep Convolutional Neural Networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10256, https://doi.org/10.5194/egusphere-egu22-10256, 2022.

The European spruce bark beetle (Ips typographus) is one of the most detrimental insect pests of European spruce forests. An effective mitigation measure consists of removing infected trees before the beetles leave the bark, which generally happens before the end of June. To minimize economic loss and prevent tree destruction, fast and early detection of the European spruce bark beetle is therefore crucial for the future of spruce forests.

In order to detect forest stressed regions, possibly associated with beetle infestation, we investigated changes in forest vigour over time. One of the most damaged regions is northern Italy, where beetle diffusion has increased sharply since Storm Adrian in late 2018.

In this work we used Sentinel-2 images of a study area in the mountain territory of Val di Fiemme (Trento, Italy) from early 2017 to late 2021. A preliminary field investigation was necessary to localize healthy (green) and stressed (red) trees. NDVI trends from Sentinel-2 showed an evident vigour discrepancy between green and red regions.

We therefore conceived a classification algorithm based on the slope of lines fitted to NDVI values over time. Model accuracy is around 86%. The result is a classified map useful for distinguishing stressed and healthy forest areas.
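The slope-based classification can be sketched as follows; the threshold value and function names are illustrative assumptions, not the calibrated values of the study:

```python
import numpy as np

def ndvi_slope(dates_days, ndvi):
    """Slope (per year) of a straight line fitted to an NDVI time series."""
    slope_per_day = np.polyfit(np.asarray(dates_days, float), ndvi, 1)[0]
    return slope_per_day * 365.25

def classify(dates_days, ndvi, threshold=-0.05):
    """Label a pixel 'stressed' if its NDVI trend declines faster than threshold."""
    return "stressed" if ndvi_slope(dates_days, ndvi) < threshold else "healthy"
```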

By using the proposed method and Google Earth Engine computational capabilities, we highlight the potential of a simple and effective model to predict and detect forest stressed areas, potentially associated with the diffusion of the European spruce bark beetle.

How to cite: Giomo, M., Moretto, J., and Fantinato, L.: Detection of forest stress from European spruce bark beetle attack in Northern Italy through a stress classification algorithm based on NDVI temporal changes, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10630, https://doi.org/10.5194/egusphere-egu22-10630, 2022.

EGU22-10780 | Presentations | NH6.1

Morphometric analysis of volcanic structures using digital elevation models and models developed from radar images in the Apan volcanic field, México. 

Jesús Octavio Ruiz Sánchez, Jesús Eduardo Méndez Serrano, Mariana Patricia Jácome Páz, Nelly Ramírez Serrato, and Nestor López Váldes

The present project aims to make a preliminary assessment of the volcanic risk represented by the Apan Volcanic Field (CVA). The methodology was divided into two parts. In the first, Digital Elevation Models (DEM) published by official sources were used to identify unreported structures and perform morphometric analysis of previously dated structures. In the second stage, a new DEM was developed using interferometric methodologies to compare the results with those obtained from official sources. Two SAR images from the SENTINEL-1 satellite of ESA's Copernicus program were used: the first, from October 14, 2021, as the leader image, and the second, from October 26, 2021, as the slave image. These images were processed in ESA's SNAP software. For the morphometric analysis, volcanic structures were classified into three major categories: young cones (0.18 Ma - 0.5 Ma), intermediate cones (0.5 Ma - 1 Ma), and old cones (1 Ma - 3 Ma). From the official DEM analysis, 243 volcanic structures were reported within the study area, with a preliminary predominance of structures in the old cone range; 4 areas with a higher concentration of volcanic structures were detected, in which some highly populated localities are found. In addition, demographic parameters were used for a better preliminary risk assessment of the study area. Both the official and the radar-derived DEMs were used for the morphometric analysis, and the results were compared with previously published models. Finally, by comparison with two other Mexican volcanic fields, it was concluded that the CVA represents a moderate volcanic risk, for which further studies and monitoring in the area are recommended. This project provides a new understanding of the volcanic hazard and risk associated with the CVA and of the development of the surrounding social environment.

How to cite: Ruiz Sánchez, J. O., Méndez Serrano, J. E., Jácome Páz, M. P., Ramírez Serrato, N., and López Váldes, N.: Morphometric analysis of volcanic structures using digital elevation models and models developed from radar images in the Apan volcanic field, México., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10780, https://doi.org/10.5194/egusphere-egu22-10780, 2022.

Traditional fertilization techniques in crop production consist of a homogeneous distribution of inputs over the whole cultivated field. Alternatively, variable fertilization methods can minimize the environmental impact and increase economic benefits.

The objective of this study is to evaluate the capabilities of a Google Earth Engine code conceived to rapidly study the variability of cultivated fields, with a view to variable fertilization. The tool is semi-automatic, as it requires just the field boundary, and it returns a few outputs ready to be inspected by the user. This work presents an application of this model to a corn field in Northern Italy (province of Venice).

Field variability is evaluated through the NDVI index extracted from Sentinel-2 images from 2017 to 2021. For this purpose, the tool provides NDVI statistics, classified maps, classified area percentages, and point NDVI trends.
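As a rough illustration of the kind of processing the tool performs, NDVI can be computed from the Sentinel-2 red and near-infrared bands and classified into vigour levels. The class thresholds below are hypothetical, not the ones used by the authors:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance arrays."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

def classify_vigour(ndvi_map, thresholds=(0.3, 0.6)):
    """Split pixels into low / medium / high vigour classes (hypothetical thresholds)."""
    return np.digitize(ndvi_map, thresholds)

# toy 2x2 field patch: NIR and red reflectance
nir = np.array([[0.8, 0.6], [0.4, 0.5]])
red = np.array([[0.1, 0.2], [0.3, 0.1]])
v = ndvi(nir, red)
classes = classify_vigour(v)   # 0 = low, 1 = medium, 2 = high vigour
```

In the actual tool this arithmetic runs per pixel and per acquisition date inside Google Earth Engine; the snippet only mirrors it on NumPy arrays.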

Results show that boundary regions of the field are systematically less vigorous than other parts, and thus crop production there is not efficient. Conversely, fertilization should be enhanced in internal parts, as they are consistently healthier.

The proposed model is a fast way to analyse field vigour status, and Google Earth Engine's capabilities allow it to be applied nearly anywhere in the world. Field variability and the linked variable fertilization are crucial to reducing environmental impact and increasing economic benefits, especially in extensive farming.

How to cite: Moretto, J., Giomo, M., Fantinato, L., and Rasera, R.: Application of a semi-automatic tool for field variability assessment on a cultivated field in Northern Italy to evaluate variable fertilization benefits, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10962, https://doi.org/10.5194/egusphere-egu22-10962, 2022.

EGU22-11589 | Presentations | NH6.1

New advances of the P-SBAS based automatic and unsupervised tool for the co-seismic Sentinel-1 DInSAR products generation 

Fernando Monterroso, Andrea Antonioli, Simone Atzori, Claudio De Luca, Riccardo Lanari, Michele Manunta, Emanuela Valerio, and Francesco Casu

Differential Synthetic Aperture Radar Interferometry (DInSAR) is a key method to estimate, with centimeter accuracy, the Earth's surface displacements caused by natural events or anthropogenic activities. Furthermore, since 2014 the scientific community has benefited from the huge spaceborne SAR data archives acquired by the Copernicus Sentinel-1 (S1) satellite constellation, which operationally provides SAR data with a free and open data access policy at a nearly global scale. By using the S1 acquisitions, an automatic and unsupervised processing tool that generates co-seismic interferograms and LOS displacement maps has been developed. This tool routinely queries two different earthquake catalogs (USGS and INGV) to trigger, in an automatic way, the S1 data download and the DInSAR processing through the Parallel Small BAseline Subsets (P-SBAS) algorithm. In particular, in order to guide the algorithm to intercept only those earthquakes that may produce ground displacements detectable through DInSAR technology, the tool starts the SAR data processing for events with a magnitude greater than 4.0 in Europe, and greater than 5.5 at a global scale.
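The triggering logic described above can be sketched as follows; the magnitude thresholds come from the abstract, while the bounding box used to decide whether an epicentre lies in Europe is an illustrative assumption, not the tool's actual criterion:

```python
def should_trigger(magnitude, lon, lat):
    """Decide whether a catalog event triggers S1 download and P-SBAS processing.
    Magnitude thresholds follow the abstract (M > 4.0 in Europe, M > 5.5 elsewhere);
    the Europe bounding box below is an illustrative assumption."""
    in_europe = -25.0 <= lon <= 45.0 and 34.0 <= lat <= 72.0
    return magnitude > (4.0 if in_europe else 5.5)

# an M 4.5 event in central Italy triggers processing; the same magnitude in Chile does not
```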

We first remark that, in order to optimize the extension of the investigated area, thus reducing the processing time and effectively exploiting the available computing resources, an algorithm for the estimation of the co-seismically affected area has been integrated as the first step of the workflow. More specifically, by considering the moment tensors provided by public catalogs (USGS, INGV, Global CMT project), a forward modelling procedure generates the predicted co-seismic displacement field, which is used by the P-SBAS algorithm to optimize some of the DInSAR processing steps. In particular, the phase unwrapping (PhU) algorithm is applied only to the part of the DInSAR interferograms delimited by the area identified through the predicted scenario, and not to the whole S1 scene. In addition, the presented automatic and unsupervised tool has been migrated to a Cloud Computing (CC) environment, specifically Amazon Web Services (AWS). This strategy allows a more efficient management of the needed computing resources, also in emergency scenarios.

The adopted solutions allowed the creation of a worldwide database of co-seismic maps. Indeed, benefiting from the last seven years of Sentinel-1 operation, the tool has generated approximately 6500 interferograms and LOS displacement maps, corresponding to a total of 383 investigated earthquakes.

Note also that the generated interferograms and displacement maps have been made available to the scientific community through the EPOS infrastructure and the Geohazards Exploitation Platform, thus helping scientists and researchers to investigate the dynamics of surface deformation in seismic zones around the Earth, even if they do not have specific DInSAR processing capabilities and/or skills available.

How to cite: Monterroso, F., Antonioli, A., Atzori, S., De Luca, C., Lanari, R., Manunta, M., Valerio, E., and Casu, F.: New advances of the P-SBAS based automatic and unsupervised tool for the co-seismic Sentinel-1 DInSAR products generation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11589, https://doi.org/10.5194/egusphere-egu22-11589, 2022.

EGU22-11701 | Presentations | NH6.1

Comparative analysis of  the role of labelled benchmark datasets for automatic flood mapping using SAR data 

Dibakar Kamalini Ritushree, Mahdi Motagh, Shagun Garg, and Binayak Ghosh

Recent years have witnessed extreme flood events across the world, irrespective of the heterogeneity of the geographical context. Accurately mapping such events is essential for disaster relief and recovery efforts. Satellite imagery from both optical and radar sensors can immensely benefit this process due to its easy interpretability and high resolution. However, the use of optical sensors for flood extent extraction is limited by weather conditions and the presence of clouds. In contrast, SAR sensors have proved to be one of the most powerful tools for flood monitoring due to their potential to observe in all-weather/day-night conditions. The exploitation of SAR in conjunction with optical datasets has shown exemplary results in flood monitoring applications.

With the onset of deep learning and big data, the application of data-driven approaches to training models has shown great potential in automatic flood mapping. In order to improve the efficiency of deep learning algorithms at a global scale, publicly available labelled benchmark datasets have been introduced. One such dataset is Sen1Floods11, which includes raw Sentinel-1 imagery and classified permanent water and flood water, covering 11 flood events. The flood events have coverage from Sentinel-1 and Sentinel-2 imagery on the same day or within 2 days of the Sentinel-1 image, from Aug 2016 to May 2019. The other is WorldFloods, which consists of Sentinel-2 data acquired during 119 flood events from Nov 2015 to March 2019. In this study, we make a comparative analysis to investigate the efficiency of these labelled benchmark datasets for automatic flood mapping using SAR data. Various types of flooding in different geographic locations in Europe, Australia, India and Iran are selected, and the segmentation networks are evaluated on existing Sentinel-1 images covering these events.
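A common way to evaluate such segmentation networks against benchmark labels is the Intersection-over-Union score; a minimal sketch, not the authors' exact evaluation protocol:

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-Union between two binary flood masks."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# toy 2x2 masks: the prediction over-detects one pixel
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
score = iou(pred, truth)   # 0.5
```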


How to cite: Ritushree, D. K., Motagh, M., Garg, S., and Ghosh, B.: Comparative analysis of  the role of labelled benchmark datasets for automatic flood mapping using SAR data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11701, https://doi.org/10.5194/egusphere-egu22-11701, 2022.

EGU22-12127 | Presentations | NH6.1

Methodologies for surface deformations analysis at regional scale 

Micol Fumagalli, Alberto Previati, Serena Rigamonti, Paolo Frattini, and Giovanni B. Crosta

Analysis of ground deformation is particularly demanding when displacement rates are in the range of a few mm/yr. This study integrates different statistical techniques to unravel the spatial and temporal patterns of vertical ground deformation in an alluvial basin. Beyond the identification of critical areas, this is also essential to delineate a conceptual model for the uplift and subsidence mechanisms in complex environments, such as a layered aquifer subject to strong piezometric oscillations and land use changes due to human activities.

The study area covers about 4000 km2 in the Lombardy region (N Italy) and includes the Milan metropolitan area and a part of the Po alluvial plain between the Como and Varese lakes. In this study, Sentinel-1A (C-band) PS-InSAR data with an average revisit time of 6 days and an average PS distance of 20 m, processed by TRE-Altamira, were analysed to investigate different movement styles in the study area.

The PS-InSAR data range from 2015 to 2020 and reveal a wide, gently subsiding area oriented in a NW-SE direction (average subsidence rate of nearly -1.5 mm/yr along the line of sight). Principal Component Analysis (PCA) and Independent Component Analysis (ICA) were applied to the ground deformation and piezometric time series, showing analogous spatial patterns of the fluctuation styles. Then, from the correlations between the spatial patterns of ground motion, groundwater level changes and geological data, and between the temporal patterns of rainfall and groundwater abstraction rates, the main causes of ground motion were identified and summarized in a conceptual model.
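The PCA step on displacement time series can be sketched with a plain NumPy SVD; the toy data below (one dominant linear subsidence mode) are an assumption for illustration, not the study's dataset:

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD. X is (n_points, n_epochs): one displacement time series per PS point.
    Returns per-point scores (spatial pattern), temporal components and explained variance."""
    Xc = X - X.mean(axis=0, keepdims=True)      # centre each epoch across points
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]   # spatial loadings
    components = Vt[:n_components]                    # temporal modes
    explained = S[:n_components] ** 2 / (S ** 2).sum()
    return scores, components, explained

# toy example: 5 PS points, 4 epochs, a single linear subsidence mode with varying rate
t = np.arange(4.0)
rates = np.array([0.0, -0.5, -1.0, -1.5, -2.0])   # assumed mm/yr per point
X = rates[:, None] * t[None, :]
scores, comps, expl = pca(X, n_components=1)      # the single mode explains ~all variance
```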

Finally, after reconstructing the aquifer composition and the geo-hydro-mechanical properties, and by implementing the hydraulic stresses from the conceptual model, a hydro-mechanically coupled FEM numerical model was developed. This allowed the hypotheses to be verified by comparing the simulated ground displacement with the measured one.

How to cite: Fumagalli, M., Previati, A., Rigamonti, S., Frattini, P., and Crosta, G. B.: Methodologies for surface deformations analysis at regional scale, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12127, https://doi.org/10.5194/egusphere-egu22-12127, 2022.

EGU22-12269 | Presentations | NH6.1 | Highlight

Time series analysis using global satellite remote sensing data archives for multi-temporal characterization of hazardous surface processes 

Sigrid Roessner, Robert Behling, Mahmud Haghshenas Haghighi, and Magdalena Vassileva

The Earth’s surface hosts a large variety of human habitats that are subject to the simultaneous influence of a wide range of dynamic processes. The resulting dynamics are mainly driven by a complex interplay between geodynamic and hydrometeorological factors, in combination with manifold human-induced land use changes and related impacts. The resulting effects on the Earth’s surface pose major threats to the population in these areas, especially under conditions of increasing population pressure and further exploitation of new and remote regions, accompanied by ongoing climate change. This situation leads to significant changes in the type and dimension of natural hazards that have not been observed in the past in many of the affected regions.

This situation has led to an increasing demand for systematic and regular large-area process monitoring, which cannot be achieved by ground-based observations alone. In this context, the potential of satellite remote sensing has long been investigated as an approach for assessing dynamic processes on the Earth’s surface over large areas at different spatial and temporal scales. Until recently, however, these attempts were largely hampered by the limited availability of suitable satellite remote sensing data at a global scale. In recent years, new globally available satellite remote sensing data sources of high spatial and temporal resolution (e.g., the Sentinels and Planet) have increased this potential to a large extent.

During the last decade, we have been pursuing extensive methodological developments in remote sensing based time series analysis, including optical and radar observations, with the goal of performing large-area and at the same time detailed spatiotemporal analysis of natural hazard prone regions affected by a variety of processes, such as landslides, floods and subsidence. Our methodological developments include, among others, large-area automated post-failure landslide detection and mapping, as well as assessment of the kinematics of pre- and post-failure slope deformation. Our combined optical and radar remote sensing approaches aim at an improved understanding of the spatiotemporal dynamics and complexities related to the evolution of these hazardous processes at different spatial and temporal scales. We have been developing and applying our methods in a large variety of natural and societal contexts, focusing on Central Asia, China and Germany.

We will present selected methodological approaches and results for a variety of hazardous surface processes investigated by satellite remote sensing based time series analysis. In doing so, we will focus on the potential of our approaches for supporting the needs and requirements imposed by the disaster management cycle, a widely used conceptual approach for disaster risk reduction and management that includes rapid response, long-term preparedness and early warning.

How to cite: Roessner, S., Behling, R., Haghshenas Haghighi, M., and Vassileva, M.: Time series analysis using global satellite remote sensing data archives for multi-temporal characterization of hazardous surface processes, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12269, https://doi.org/10.5194/egusphere-egu22-12269, 2022.

EGU22-12271 | Presentations | NH6.1 | Highlight

Deep learning, remote sensing and visual analytics to support automatic flood detection 

Binayak Ghosh, Shagun Garg, Mahdi Motagh, Daniel Eggert, Mike Sips, Sandro Martinis, and Simon Plank

Floods can have devastating consequences for people, infrastructure, and the ecosystem. Satellite imagery has proven to be an efficient instrument in supporting disaster management authorities during flood events. In contrast to optical remote sensing technology, Synthetic Aperture Radar (SAR) can penetrate clouds, so authorities can use SAR images even under cloudy conditions. A challenge with SAR is the accurate classification and segmentation of flooded areas from SAR imagery. Recent advancements in deep learning algorithms have demonstrated the potential of deep learning for image segmentation. Our research adopted deep learning algorithms to classify and segment flooded areas in SAR imagery. We used UNet and Feature Pyramid Network (FPN), both based on an EfficientNet-B7 implementation, to detect flooded areas in SAR imagery of Nebraska, North Alabama, Bangladesh, Red River North, and Florence. We evaluated both deep learning methods' predictive accuracy and will present the evaluation results at the conference. In the next step of our research, we are developing an XAI toolbox to support the interpretation of detected flooded areas and the algorithmic decisions of the deep learning methods through interactive visualizations.

How to cite: Ghosh, B., Garg, S., Motagh, M., Eggert, D., Sips, M., Martinis, S., and Plank, S.: Deep learning, remote sensing and visual analytics to support automatic flood detection, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12271, https://doi.org/10.5194/egusphere-egu22-12271, 2022.

EGU22-12507 | Presentations | NH6.1

Spatio-temporal analysis of surface displacements in N’Djamena, Chad derived by Persistent Scatter-Interferometric Synthetic Aperture Radar (PS-InSAR) and Small BAseline Subset (SBAS) techniques 

Michelle Rygus, Giulia Tessari, Francesco Holecz, Marie-Louise Vogt, Djoret Daïra, Elisa Destro, Moussa Isseini, Giaime Origgi, Calvin Ndjoh Messina, and Claudia Meisina

High-resolution characterisation of land deformation and its spatio-temporal response to external triggering mechanisms is an important step towards improving geological hazard forecasting and management. The work presented here is part of the ResEau-Tchad project (www.reseau-tchad.org), with a focus on the city of N’Djamena. The extraction of groundwater to sustain this rapidly growing capital city has increased the pressure on water supply and urban sanitation infrastructures which are failing to meet the current water demand. In this study we exploit Synthetic-Aperture Radar (SAR) data acquired by the Sentinel-1 satellite to investigate the temporal variability and spatial extent of land deformation to assist in the development of a sustainable water management program in N’Djamena city. 

The objectives of the work are: 1) to analyse the recent evolution of land deformation using two multi-temporal differential interferometry techniques, SBAS and PS-InSAR; and, 2) to investigate the land deformation mechanism in order to identify the factors triggering surface movements. The PS-InSAR and SBAS techniques are implemented on SAR images obtained in both ascending and descending orbits from April 2015 to May 2021 to generate high resolution deformation measurements representing the total displacement observed at the surface. While the pattern of displacement indicated by the two datasets is similar, the average velocity values obtained with PS-InSAR tend to be noisier than the ones derived using the SBAS technique, particularly when the SBAS time-series shows non-linear deformation trends.

Characterisation of the subsidence areas by means of statistical analyses is carried out to reveal the surface deformation patterns related to different geo-mechanical processes. The integration of the spatio-temporal distribution of PS and SBAS InSAR results with geological, hydrological, and hydrogeological data, along with subsurface lithological modelling, shows a relationship between vertical displacements, clay sediments, and surface water accumulation. These areas are located mostly in the surroundings of the urban area. The city centre is observed to be mostly stable, which might be the result of the removal of surface water through the city drainage system. Investigation of the relationship between vertical displacements and seasonal groundwater fluctuations or effects of groundwater withdrawal is limited by the temporally sparse piezometric dataset; however, the recent deformation rates appear to be correlated with the groundwater level trend at some locations.

How to cite: Rygus, M., Tessari, G., Holecz, F., Vogt, M.-L., Daïra, D., Destro, E., Isseini, M., Origgi, G., Ndjoh Messina, C., and Meisina, C.: Spatio-temporal analysis of surface displacements in N’Djamena, Chad derived by Persistent Scatter-Interferometric Synthetic Aperture Radar (PS-InSAR) and Small BAseline Subset (SBAS) techniques, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12507, https://doi.org/10.5194/egusphere-egu22-12507, 2022.

EGU22-12552 | Presentations | NH6.1

Assessment of global burned area satellite products in the African savannah 

Manuel Arbelo, Jose Rafael García-Lázaro, and Jose Andres Moreno-Ruiz

Africa is the continent with the highest annual burned area, with the African savanna being the most affected ecosystem. This paper presents an assessment of the spatio-temporal accuracy of three of the main global-scale burned area products derived from images from polar-orbiting satellite-borne sensors: 1) Fire_CCI 5.1, of 250 m spatial resolution, developed by the European Space Agency (ESA) and led by the University of Alcalá de Henares; 2) MCD64A1 C6, of 500 m spatial resolution, developed by the University of Maryland; and 3) GABAM (Global Annual Burned Area Map), of 30 m spatial resolution, developed through the Google Earth Engine (GEE) platform by researchers from the Aerospace Information Research Institute of China. The first two products are based on daily images from the MODIS (Moderate-Resolution Imaging Spectroradiometer) sensor onboard NASA's Terra and Aqua satellites, and the third is based on Landsat images available on GEE. The almost total absence of reference burned area data from official sources has made it difficult to assess the spatio-temporal accuracy of these burned area products in Africa. However, the recent creation of the Burned Area Reference Database (BARD), which includes reference datasets from different international projects, opens the possibility of a more detailed assessment. The study focused on a region covering approximately 29.5 million ha located in the southern hemisphere between 10°S and 15°S and bounded longitudinally by the 35°E and 40°E meridians. The results show that the Fire_CCI 5.1, MCD64A1 C6 and GABAM products present an annual distribution of burned area with an irregular pattern in the interval between 7 and 10 million ha per year (around 30% of the whole study area), but there is hardly any correlation between their time series, with correlation coefficients lower than 0.3 for the period 2000-2019.
The spatio-temporal accuracy analysis was performed for 2005, 2010 and 2016, the only years for which BARD has reference perimeters. The results are highly variable, with values between 1 and 20 million ha per year depending on the product, the year and the reference set used, which does not allow definitive conclusions to be drawn on the accuracy of the burned area estimates. These results indicate that uncertainties persist both in the burned area estimates derived from remote sensing products in these regions and in the reference sets used for their evaluation, requiring further research effort.
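The weak agreement between products can be checked with pairwise Pearson correlations of the annual burned-area series; a sketch with synthetic series standing in for the real product data:

```python
import numpy as np

rng = np.random.default_rng(1)
# three synthetic annual burned-area series (Mha, 20 years) standing in for the products
fire_cci = rng.uniform(7, 10, 20)
mcd64a1 = rng.uniform(7, 10, 20)
gabam = rng.uniform(7, 10, 20)

# 3x3 matrix of pairwise Pearson correlation coefficients between the series
r = np.corrcoef(np.vstack([fire_cci, mcd64a1, gabam]))
```

With the real series, off-diagonal entries of `r` below 0.3 would reproduce the weak inter-product agreement reported above.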

How to cite: Arbelo, M., García-Lázaro, J. R., and Moreno-Ruiz, J. A.: Assessment of global burned area satellite products in the African savannah, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12552, https://doi.org/10.5194/egusphere-egu22-12552, 2022.

Many satellite images are corrupted by striping; this noise degrades the visual quality of the images and inevitably introduces errors in processing. Thermal and hyperspectral images often suffer from striping. The frequency distribution characteristics of stripe noise make it difficult to remove such noise in the spatial domain; conversely, this noise can be efficiently detected in the frequency domain. Numerous solutions have been proposed to eliminate such noise using the Fourier transform; however, most are subjective and time-consuming approaches.

The lack of a fast and automated tool for this task motivated us to introduce a Convolutional Neural Network-based tool that uses the U-Net architecture in the frequency domain to suppress the anomalies caused by stripe noise. We added synthetic noise to satellite images to train the model. Then, we taught the network how to mask these anomalies in the frequency domain. The input image dataset was down-sampled to a size of 128 × 128 pixels for a fast training time. However, our results suggest that the output mask can be up-scaled and applied to the original Fourier transform of the image and still achieve satisfying results; this means that the proposed algorithm is applicable to images regardless of their size.
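The core idea, that periodic stripes collapse into isolated peaks in the frequency domain where they can be masked, can be sketched without any neural network; the U-Net replaces the hand-crafted peak detection below with a learned mask:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.normal(0.5, 0.05, (64, 64))                         # synthetic "clean" scene
stripe = 0.5 * np.sin(2 * np.pi * np.arange(64) / 4)[:, None]   # periodic horizontal stripes
noisy = scene + stripe

# periodic stripes concentrate into isolated peaks on the zero-horizontal-frequency column
F = np.fft.fftshift(np.fft.fft2(noisy))
cy, cx = 32, 32                        # centre (DC) after fftshift
col = np.abs(F[:, cx]).copy()
col[cy - 2:cy + 3] = 0                 # protect the DC / low-frequency region
peaks = np.argsort(col)[-2:]           # the two symmetric stripe peaks
mask = np.ones_like(F)
mask[peaks, cx] = 0                    # suppress the stripe frequencies
clean = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Zeroing the two peaks removes the stripe while leaving the rest of the spectrum (and hence the scene) essentially untouched.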

After the training step, the U-Net architecture can confidently find the anomalies and create an acceptable bounding mask; the results show that, with enough training data, the proposed procedure can efficiently remove stripe noise from all sorts of images. At this stage, we are trying to further develop the model to detect and suppress more complex synthetic noise. Next, we will focus on removing real stripe noise from satellite images to present a robust tool.

How to cite: Rangzan, M. and Attarchi, S.: Removing Stripe Noise from Satellite Images using Convolutional Neural Networks in Frequency Domain, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12575, https://doi.org/10.5194/egusphere-egu22-12575, 2022.

EGU22-1294 | Presentations | CR2.8

What determines the location of Antarctic blue ice areas? A deep learning approach 

Veronica Tollenaar, Harry Zekollari, Devis Tuia, Benjamin Kellenberger, Marc Rußwurm, Stef Lhermitte, and Frank Pattyn

The vast majority of the Antarctic ice sheet is covered with snow that compacts under its own weight and transforms into ice below the surface. However, in some areas, this typically blue-colored ice is directly exposed at the surface. These so-called "blue ice areas" represent islands of negative surface mass balance through sublimation and/or melt. Moreover, blue ice areas expose old ice that is easily accessible in large quantities at the surface, and some areas contain ice that extends beyond the time scales of classic deep-drilling ice cores.

Observation and modeling efforts suggest that the location of blue ice areas is related to a specific combination of topographic and meteorological factors. In the literature, these factors are described as (i) enhanced katabatic winds that erode snow, due to an increase of the surface slope or a tunneling effect of topography, (ii) the increased albedo of blue ice (with respect to snow), which enhances ablative processes, and (iii) the presence of nunataks (mountains protruding through the ice) that act as barriers to the ice flow upstream and prevent deposition of blowing snow on the lee side of the mountain. However, it remains largely unknown what role these physical processes play in creating and/or maintaining blue ice at the surface of the ice sheet.

Here, we study how a combination of environmental and topographic factors leads to the observation of blue ice. We also quantify the relevance of the individual processes and build an interpretable model aiming not only at predicting blue ice presence, but also at explaining why it is there. To do so, data are fed into a convolutional neural network, a machine learning algorithm which uses the spatial context of the data to generate a prediction of the presence of blue ice areas. More specifically, we use a U-Net architecture that, through convolutions and linked up-convolutions, obtains a semantic segmentation (i.e., a pixel-level map) of the input data. Ground reference data are obtained from existing products of blue ice area outlines that are based on multispectral observations. These products contain considerable uncertainties, as (i) the horizontal change from snow to ice is gradual and a single threshold in this transition is not applicable uniformly over the continent, and (ii) the blue ice area extent is known to vary seasonally. Therefore, we train our deep learning model with a loss function whose weight increases towards the center of blue ice areas.
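One simple way to build such a centre-weighted loss map, assuming a binary blue-ice mask, is to let weights grow with approximate distance from the area boundary; the erosion-based sketch below is an illustrative stand-in for whatever weighting scheme the authors actually use:

```python
import numpy as np

def center_weights(mask, n_iter=10):
    """Weight map increasing toward the interior of a binary blue-ice mask,
    built by repeated 4-neighbour erosion (a simple stand-in for a distance transform)."""
    m = mask.astype(bool)
    w = np.zeros(mask.shape, dtype=float)
    for _ in range(n_iter):
        if not m.any():
            break
        w += m                      # interior pixels accumulate weight each round
        eroded = m.copy()           # 4-neighbour erosion
        eroded[1:, :] &= m[:-1, :]
        eroded[:-1, :] &= m[1:, :]
        eroded[:, 1:] &= m[:, :-1]
        eroded[:, :-1] &= m[:, 1:]
        m = eroded
    return w / w.max() if w.max() else w

mask = np.zeros((7, 7), int)
mask[1:6, 1:6] = 1                  # a 5x5 blue-ice patch
w = center_weights(mask)
# the centre pixel carries the largest weight, boundary pixels the smallest
```

Multiplying a per-pixel loss by `w` down-weights the uncertain transition zone at the patch margins, as the abstract describes.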

Our first results indicate that the neural network predicts the location of blue ice relatively well, and that surface elevation data play an important role in determining the location of blue ice. In our ongoing work, we analyze both the predictions and the neural network itself to quantify which factors possess predictive capacity to explain the location of blue ice. Eventually, this information may allow us to answer the simple yet important question of why blue ice areas are located where they are, with potentially important implications for their role as paleoclimate archives and for their evolution under changing climatic conditions.

How to cite: Tollenaar, V., Zekollari, H., Tuia, D., Kellenberger, B., Rußwurm, M., Lhermitte, S., and Pattyn, F.: What determines the location of Antarctic blue ice areas? A deep learning approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1294, https://doi.org/10.5194/egusphere-egu22-1294, 2022.

EGU22-2726 | Presentations | CR2.8 | Highlight

Dissecting Glaciers - Can an Automated Bio-Medical Image Segmentation Tool also Segment Glaciers? 

Nora Gourmelon, Thorsten Seehaus, Matthias Braun, Andreas Maier, and Vincent Christlein

The temporal variability of glacier calving front positions provides essential information about the state of marine-terminating glaciers. These positions can be extracted from Synthetic Aperture Radar (SAR) images throughout the year. To automate this extraction, we apply deep learning techniques that segment the SAR images into different classes: glacier; ocean including ice-melange and sea-ice covered ocean; rock outcrop; and regions with no information like areas outside the SAR swath, layover regions and SAR shadow. The calving front position can be derived from these regions during post-processing.   
A downside of deep learning is that hyper-parameters need to be tuned manually. For this tuning, expert knowledge and experience in deep learning are required. Furthermore, the fine-tuning process takes up much time, and the researcher needs to have programming skills.
    
In the biomedical imaging domain, a deep learning framework [1] has become increasingly popular for image segmentation. The nnU-Net can be used out-of-the-box. It automatically adapts the U-Net, the state-of-the-art architecture for image segmentation, to different datasets and segmentation tasks. Hence, no more manual tuning is required. The framework outperforms specialized deep learning pipelines in a multitude of public biomedical segmentation competitions.   
We apply the nnU-Net to the task of glacier segmentation, investigating whether the framework is also beneficial in the domain of remote sensing. To this end, we train and test the nnU-Net on CaFFe (https://github.com/Nora-Go/CaFFe), a benchmark dataset for automatic calving front detection on SAR images. CaFFe comprises geocoded, orthorectified imagery acquired by the satellite missions RADARSAT-1, ERS-1/2, ALOS PALSAR, TerraSAR-X, TanDEM-X, Envisat, and Sentinel-1, covering the period 1995 - 2020. The ground range resolution varies between 7 and 20 m. The nnU-Net learns from the multi-class "zones" labels provided with the dataset. We adopt the post-processing scheme from Gourmelon et al. [2] to extract the front from the segmented landscape regions. The test set includes images from the Mapple Glacier located on the Antarctic Peninsula and the Columbia Glacier in Alaska. The nnU-Net's calving front predictions for the Mapple Glacier lie close to the ground truth, with a mean distance error of just 125 m. As the Columbia Glacier shows several calving front sections, its segmentation is more difficult than that of the laterally constrained Mapple Glacier. This complexity of the calving fronts is also reflected in the results: predictions for the Columbia Glacier show a mean distance error of 635 m. In conclusion, the results demonstrate that the nnU-Net holds considerable potential for the remote sensing domain, especially for glacier segmentation.
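A mean distance error of this kind can be computed symmetrically between the predicted and reference front polylines, sampled as point sets; the sketch below is a plausible formulation, not necessarily the benchmark's exact metric:

```python
import numpy as np

def mean_distance_error(pred_pts, true_pts):
    """Symmetric mean distance between two fronts sampled as point sets
    (result is in the units of the input coordinates, e.g. metres)."""
    pred = np.asarray(pred_pts, float)
    true = np.asarray(true_pts, float)
    # pairwise Euclidean distances between every predicted and every reference point
    d = np.linalg.norm(pred[:, None, :] - true[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# two parallel fronts 100 m apart yield an error of 100 m
pred = [(x, 0.0) for x in range(5)]
true = [(x, 100.0) for x in range(5)]
err = mean_distance_error(pred, true)   # 100.0
```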
    
[1] Isensee, F., Jaeger, P.F., Kohl, S.A.A. et al. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 18, 203–211 (2021). https://doi.org/10.1038/s41592-020-01008-z 

[2] Gourmelon, N., Seehaus, T., Braun, M., Maier, A., Christlein, V.: Calving Fronts and Where to Find Them: A Benchmark Dataset and Methodology for Automatic Glacier Calving Front Extraction from SAR Imagery, In Prep.

How to cite: Gourmelon, N., Seehaus, T., Braun, M., Maier, A., and Christlein, V.: Dissecting Glaciers - Can an Automated Bio-Medical Image Segmentation Tool also Segment Glaciers?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2726, https://doi.org/10.5194/egusphere-egu22-2726, 2022.

EGU22-2904 | Presentations | CR2.8

Automated mapping of Eastern Himalayan glacial lakes using deep learning and multisource remote sensing data 

Saurabh Kaushik, Tejpal Singh, Pawan Kumar Joshi, and Andreas J Dietz

The Himalayan glacierized region has experienced a substantial rise in the number and area of glacial lakes in the past two decades. These glacial lakes directly influence glacier melt, velocity and geometry, and thus the overall response of the glacier to climate change. The sudden release of water from these glacial lakes poses a severe threat to downstream communities and infrastructure. Thereby, regular monitoring and modelling of these lakes is significant for understanding regional climate change and mitigating the anticipated impact of glacial lake outburst floods. Here, we propose an automated scheme for Himalayan glacial lake extent mapping using multisource remote sensing data and a state-of-the-art deep learning technique. A combination of multisource remote sensing data [Synthetic Aperture Radar (SAR) coherence, thermal, visible, near-infrared, shortwave infrared, Advanced Land Observing Satellite (ALOS) DEM, surface slope and Normalised Difference Water Index (NDWI)] is used as input to a fully connected feed-forward Convolutional Neural Network (CNN). The CNN is trained on 660 images (300×300×10) collected from 11 sites spread across the Himalaya. The CNN architecture is designed by choosing the optimum input size, number of hidden layers, convolutional layers, filters, and other hyperparameters using a trial-and-error method. The model performance is evaluated over 3 different sites in the Eastern Himalaya, representing heterogeneous landscapes. The novelty of the presented automated scheme lies in its spatio-temporal transferability over a large geographical region (~8477, 10336 and 6013 km2). Future work involves intra-annual lake extent mapping across the High Mountain Asia region in an automated fashion.

Keywords: Glacial Lake, convolutional neural network, semantic segmentation, remote sensing, Himalaya, SAR and climate change

How to cite: Kaushik, S., Singh, T., Joshi, P. K., and Dietz, A. J.: Automated mapping of Eastern Himalayan glacial lakes using deep learning and multisource remote sensing data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2904, https://doi.org/10.5194/egusphere-egu22-2904, 2022.

EGU22-3446 | Presentations | CR2.8

The AI-CORE Project - Artificial Intelligence for Cold Regions 

Andreas Dietz and Celia Baumhoer and the AI-CORE Team

Artificial Intelligence for Cold Regions (AI-CORE) is a collaborative approach for applying Artificial Intelligence (AI) methods in the field of remote sensing of the cryosphere. Several research institutes (German Aerospace Center, Alfred-Wegener-Institute, Technical University Dresden) bundled their expertise to jointly develop AI-based solutions for pressing geoscientific questions in cryosphere research. The project addresses four geoscientific use cases such as the change pattern identification of outlet glaciers in Greenland, the object identification in permafrost areas, the detection of calving fronts in Antarctica and the firn-line detection on glaciers. Within this presentation, the four AI-based final approaches for each addressed use case will be presented and exemplary results will be shown. Further on, the implementation of all developed AI-methods in three different computer centers was realized and the lessons learned from implementing several ready-to-use AI-tools in different processing infrastructures will be discussed. Finally, a best-practice example for sharing AI-implementations between different institutes is provided along with opportunities and challenges faced during the present project duration.

How to cite: Dietz, A. and Baumhoer, C. and the AI-CORE Team: The AI-CORE Project - Artificial Intelligence for Cold Regions, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3446, https://doi.org/10.5194/egusphere-egu22-3446, 2022.

EGU22-3701 | Presentations | CR2.8 | Highlight

Snow accumulation over the world's glaciers (1981-2021) inferred from climate reanalyses and machine learning 

Matteo Guidicelli, Marco Gabella, Matthias Huss, and Nadine Salzmann

The scarcity and limited accuracy of snow and precipitation observation and estimation in high-mountain regions reduce our understanding of climatic-cryospheric processes. Thus, we compared the snow water equivalent (SWE) from winter mass balance observations of 95 glaciers distributed over the Alps, Canada, Central Asia and Scandinavia with the cumulative gridded precipitation data from the ERA-5 and MERRA-2 reanalysis products. We propose a machine learning model to downscale the gridded precipitation from the reanalyses to the altitude of the glaciers. The machine learning model is a gradient boosting regressor (GBR), which combines several meteorological variables from the reanalyses (air temperature and relative humidity are also downscaled to the altitude of the glaciers) and topographical parameters. Among the most important variables selected by the GBR model are the downscaled relative humidity and the downscaled air temperature. These GBR-derived estimates are evaluated against the winter mass balance observations by means of a leave-one-glacier-out cross-validation (site-independent GBR) and a leave-one-season-out cross-validation (season-independent GBR). The estimates downscaled by the GBR show lower biases and higher correlations with the winter mass balance observations than downscaled estimates derived with a lapse-rate-based approach. Finally, the GBR estimates are used to derive SWE trends between 1981 and 2021 at high altitudes. The trends obtained from the GBRs are more pronounced than those obtained from the gridded precipitation of the reanalyses. When the data are regrouped region-wide, significant trends are only observed for the Alps (positive) and for Scandinavia (negative), while significant positive or negative trends are observed in all regions when looking locally at single glaciers and specific elevations.
Positive (negative) SWE trends are typically observed at higher (lower) elevations, where the impact of rising temperatures is less (more) dominant.
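The leave-one-glacier-out cross-validation scheme can be sketched as follows; a linear least-squares fit stands in for the GBR, and the glacier IDs, features and targets are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
X = rng.normal(size=(n, 3))                  # e.g. temperature, humidity, elevation
y = X @ np.array([0.5, -1.0, 2.0]) + rng.normal(scale=0.1, size=n)
glacier = rng.integers(0, 5, size=n)         # IDs of 5 synthetic glaciers

errors = []
for g in np.unique(glacier):
    train, test = glacier != g, glacier == g
    coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)  # fit on other glaciers
    pred = X[test] @ coef                                       # predict held-out glacier
    errors.append(float(np.sqrt(np.mean((pred - y[test]) ** 2))))
rmse = float(np.mean(errors))
print(round(rmse, 3))
```

Holding out one entire glacier per fold tests spatial transferability, which a random train/test split would overstate.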

How to cite: Guidicelli, M., Gabella, M., Huss, M., and Salzmann, N.: Snow accumulation over the world's glaciers (1981-2021) inferred from climate reanalyses and machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3701, https://doi.org/10.5194/egusphere-egu22-3701, 2022.

EGU22-5317 | Presentations | CR2.8

Point Mass Balance Regression using Deep Neural Networks: A Transfer Learning Approach 

Ritu Anilkumar, Rishikesh Bharti, and Dibyajyoti Chutia

The last few years have seen an increasing number of studies modeling glacier evolution using deep learning. Most of these techniques have focused on artificial neural networks (ANNs) capable of providing a regressed value of mass balance from topographic and meteorological input features. The large number of parameters in an ANN demands a large dataset for training. This is relatively difficult to achieve for regions with a sparse in-situ measurement set-up such as the Himalayas. For example, of the 14326 point mass balance measurements obtained from the Fluctuations of Glaciers database for the period 1950-2020 for glaciers between 60°S and 60°N, a mere 362 points over four glaciers exist for the Himalayan region. These are insufficient to train complex neural network architectures over the region. We attempt to overcome this data hurdle by using transfer learning: the parameters are first trained over the 9584 points in the Alps, after which the weights are used for retraining on the Himalayan data points. Fourteen meteorological variables from the ERA5-Land monthly averaged reanalysis were used as input features for the study. A 70-30 split of the training and testing set was maintained to ensure the authenticity of the accuracy estimates via independent testing. Estimates are assessed on a glacier scale in the temporal domain to assess the feasibility of using deep learning to fill temporal gaps in data. Our method is also compared with other machine learning algorithms such as random forest-based regression and support vector-based regression, and we observe that the complexity of the dataset is better represented by the neural network architecture. With an overall normalized root mean squared loss consistently less than 0.09, our results suggest the capability of deep learning to fill the temporal data gaps over the glaciers and potentially reduce the spatial gap on a regional scale.
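The transfer-learning idea (pretrain on the data-rich region, fine-tune on the sparse one) can be sketched with a linear model in place of the deep ANN; all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([1.0, -2.0, 0.5])
Xs = rng.normal(size=(1000, 3))                 # source domain (Alps stand-in)
ys = Xs @ true_w
Xt = rng.normal(size=(20, 3))                   # small target domain (Himalaya stand-in)
yt = Xt @ (true_w + 0.1)                        # slightly shifted relationship

w_src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)  # "pretraining" on the large set

def fine_tune(w0, X, y, lr=0.05, steps=500):
    """Gradient descent on MSE, initialised from the pretrained weights."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

w_ft = fine_tune(w_src, Xt, yt)
dev = float(np.abs(w_ft - (true_w + 0.1)).max())  # distance to the target-domain solution
print(round(dev, 4))
```

Starting from the source-domain weights means the 20 target points only need to correct a small shift rather than learn the relationship from scratch, which is the rationale for transfer learning with sparse in-situ data.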

How to cite: Anilkumar, R., Bharti, R., and Chutia, D.: Point Mass Balance Regression using Deep Neural Networks: A Transfer Learning Approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5317, https://doi.org/10.5194/egusphere-egu22-5317, 2022.

EGU22-5612 | Presentations | CR2.8

Retrieving freeze/thaw-cycles using Machine Learning approach in Nunavik (Québec, Canada) 

Yueli Chen, Lingxiao Wang, Monique Bernier, and Ralf Ludwig

In the terrestrial cryosphere, freeze/thaw (FT) state transition plays an important and measurable role for climatic, hydrological, ecological, and biogeochemical processes in permafrost landscapes.

Satellite active and passive microwave remote sensing has shown its principal capacity to provide effective monitoring of landscape FT dynamics. Many algorithms have been developed and evaluated over time in this scope. With the advancement of data science and artificial intelligence methods, the potential of better understanding the cryosphere is emerging.

This work is dedicated to exploring an effective approach to retrieve the FT state from microwave remote sensing data using machine learning methods, which is expected to fill blind spots in the deterministic algorithms. Time series of remote sensing data will be created as training data. In the initial stage, the work aims to test feasibility and establish a basic neural network based on fewer training factors. In the advanced stage, we will improve the model in terms of structure, such as adding more complex dense layers and testing optimizers, and in terms of domain knowledge, such as introducing more influencing factors for training. Related parameters, for example land cover types, will be included in the analysis to improve the method and the understanding of FT-related processes.

How to cite: Chen, Y., Wang, L., Bernier, M., and Ludwig, R.: Retrieving freeze/thaw-cycles using Machine Learning approach in Nunavik (Québec, Canada), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5612, https://doi.org/10.5194/egusphere-egu22-5612, 2022.

EGU22-5910 | Presentations | CR2.8

Learning and screening of neural networks architectures for sub-grid-scale parametrizations of sea-ice dynamics from idealised twin experiments 

Tobias Finn, Charlotte Durand, Alban Farchi, Marc Bocquet, Yumeng Chen, Alberto Carrassi, and Veronique Dansereau

In this talk, we propose to use neural networks in a hybrid modelling setup to learn sub-grid-scale dynamics of sea ice that cannot be resolved by geophysical models. The multifractal and stochastic nature of sea-ice dynamics creates significant obstacles to representing such dynamics with neural networks. Here, we will introduce and screen specific neural network architectures that might be suited to this kind of task. To prove our concept, we perform idealised twin experiments in a simplified Maxwell-Elasto-Brittle sea-ice model which includes only sea-ice dynamics within a channel-like setup. In our experiments, we use high-resolution runs as a proxy for reality, and we train neural networks to correct errors of low-resolution forecast runs.

Since we perform the two kinds of runs on different grids, we need to define a projection operator from high to low resolution. In practice, we compare the low-resolution forecasted state at a given time to the projected state of the high-resolution run at the same time. Using a catalogue of these forecasted and projected states, we learn and screen different neural network architectures with supervised training in an offline learning setting. Together with this simplified training, the screening helps us select appropriate architectures for the representation of multifractality and stochasticity within the sea-ice dynamics. As a next step, these screened architectures have to be scaled to larger and more complex sea-ice models like neXtSIM.
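One simple choice of projection operator (block averaging; the study's actual operator may differ) and the resulting training pair can be sketched as:

```python
import numpy as np

def project(field_hr, factor):
    """Project a high-resolution field onto a coarse grid by block averaging."""
    h, w = field_hr.shape
    return field_hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(3)
truth_hr = rng.normal(size=(64, 64))      # high-resolution "reality" run
forecast_lr = rng.normal(size=(16, 16))   # low-resolution forecast state
target = project(truth_hr, 4)             # projected truth on the coarse grid
error = target - forecast_lr              # what the network learns to correct
print(target.shape, error.shape)
```

The catalogue of (forecast, projected truth) pairs then provides the supervised inputs and targets for offline training.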

How to cite: Finn, T., Durand, C., Farchi, A., Bocquet, M., Chen, Y., Carrassi, A., and Dansereau, V.: Learning and screening of neural networks architectures for sub-grid-scale parametrizations of sea-ice dynamics from idealised twin experiments, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5910, https://doi.org/10.5194/egusphere-egu22-5910, 2022.

EGU22-6948 | Presentations | CR2.8

Mapping Glacier Basal Sliding with Beamforming and Artificial Intelligence 

Josefine Umlauft, Philippe Roux, Albanne Lecointre, Florent Gimbert, Ugo Nanni, Andrea Walpersdorf, Bertrand Rouet-LeDuc, Claudia Hulbert, Daniel Trugman, and Paul Johnson

The cryosphere is a highly active and dynamic environment that rapidly responds to changing climatic conditions. In particular, the physical processes behind glacial dynamics are poorly understood because they remain challenging to observe. Glacial dynamics are strongly intermittent in time and heterogeneous in space. Thus, monitoring with high spatio-temporal resolution is essential.

In the course of the RESOLVE (‘High-resolution imaging in subsurface geophysics: development of a multi-instrument platform for interdisciplinary research’) project, continuous seismic observations were obtained using a dense seismic network (100 nodes, Ø 700 m) installed on Glacier d’Argentière (French Alps) during May 2018. This unique data set offers the chance to study targeted processes and dynamics within the cryosphere in detail on a local scale.

 

To identify seismic signatures of the ice bed in the presence of melt-induced microseismic noise, we applied the supervised ML technique gradient tree boosting. The approach has proven suitable for directly observing the physical state of a tectonic fault. Transferred to glacial settings, seismic surface records could therefore reveal frictional properties of the ice bed, offering completely new means to study the subglacial environment and basal sliding, which are difficult to access with conventional approaches.

We built our ML model as follows: statistical properties of the continuous seismic records (variance, kurtosis and quantile ranges), meteorological data and a seismic source catalogue obtained using beamforming (matched field processing) serve as features, which we fit to measures of the GPS displacement rate of Glacier d’Argentière (labels). Our preliminary results suggest that seismic source activity at the bottom of the glacier strongly correlates with surface displacement rates and hence is directly linked to basal motion. By ranking the importance of our input features, we have learned that, unlike for reasonably long monitoring time series along tectonic faults, the statistical properties of the seismic observations alone do not suffice in glacial environments to estimate surface displacement. The additional beamforming features, however, are a rich archive that enhances the ML model performance considerably and allows direct observation of ice dynamics.
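The statistical feature step can be sketched as follows; the seismogram is synthetic noise, and the real feature set also includes meteorological data and beamforming source counts:

```python
import numpy as np

def window_features(trace, win):
    """Variance, kurtosis and 10-90% quantile range per non-overlapping window."""
    feats = []
    for start in range(0, len(trace) - win + 1, win):
        w = trace[start:start + win]
        var = w.var()
        kurt = ((w - w.mean()) ** 4).mean() / var ** 2
        q_range = np.quantile(w, 0.9) - np.quantile(w, 0.1)
        feats.append([var, kurt, q_range])
    return np.array(feats)

rng = np.random.default_rng(4)
trace = rng.normal(size=6000)          # stand-in for one continuous channel
X = window_features(trace, win=600)    # one feature row per window
print(X.shape)
```

Each feature row is then paired with the GPS displacement rate over the same window to form one training sample for the gradient tree boosting model.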

How to cite: Umlauft, J., Roux, P., Lecointre, A., Gimbert, F., Nanni, U., Walpersdorf, A., Rouet-LeDuc, B., Hulbert, C., Trugman, D., and Johnson, P.: Mapping Glacier Basal Sliding with Beamforming and Artificial Intelligence, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6948, https://doi.org/10.5194/egusphere-egu22-6948, 2022.

EGU22-8945 | Presentations | CR2.8

Ice Lead Network Analysis 

Julia Kaltenborn, Venkatesh Ramesh, and Thomas Wright

Ice lead analysis is an essential task for evaluating climate change processes in the Arctic. Ice leads are narrow cracks in the sea ice, which build a complex network. While ice lead detection and modeling have been performed in numerous ways based on airborne images, the dynamics of ice leads over time remain hidden and largely unexplored. These dynamics could be analyzed by interpreting the ice leads as more than just airborne images, but as what they really are: a dynamic network. The leads’ start, end, and intersection points can be considered nodes, and the leads themselves the edges of a network. As the nodes and edges change over time, the ice lead network is constantly evolving. This new network perspective on ice leads could be of great interest to the cryospheric science community since it opens the door to new methods. For example, adapting common link prediction methods might make data-driven ice lead forecasting and tracking feasible.
To reveal the hidden dynamics of ice leads, we performed a spatio-temporal and network analysis of ice lead networks. The networks used and presented here are based on daily ice lead observations from Moderate Resolution Imaging Spectroradiometer (MODIS) between 2002 and 2020 by Hoffman et al. [1].
The spatio-temporal analysis of the ice leads exhibits seasonal, annual, and overall trends in the ice lead dynamics. We found that the number of ice leads is decreasing, and the number of width and length outliers is increasing overall. The network analysis of the ice lead graphs reveals unique network characteristics that diverge from those of common real-world networks. Most notably, current network science methods (1) exploit the information that is embedded in the connections of the network, e.g., in connection clusters, while (2) nodes remain relatively fixed over time. Ice lead networks, however, (1) embed their relevant information spatially, e.g., in spatial clusters, and (2) shift and change drastically. These differences require improvements and modifications to common graph classification and link prediction methods, such as Preferential Attachment and EvolveGCN, before they can be applied to dynamic ice lead networks.
This work is a call for extending existing network analysis toolkits to include a new class of real-world dynamic networks. Utilizing network science techniques will hopefully further our understanding of ice leads and thus of Arctic processes that are key to climate change mitigation and adaptation.
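The node/edge abstraction described above can be sketched with a plain adjacency structure; the coordinates and lead segments are invented for illustration:

```python
# Lead endpoints and intersections as nodes (here with lat/lon coordinates),
# lead segments as edges of an undirected graph.
nodes = {0: (82.1, 10.0), 1: (82.3, 10.4), 2: (82.2, 11.1), 3: (82.5, 10.8)}
leads = [(0, 1), (1, 2), (1, 3)]  # three segments meeting at node 1

adj = {n: set() for n in nodes}
for a, b in leads:
    adj[a].add(b)
    adj[b].add(a)

# Node degree identifies intersection points of the lead network.
degree = {n: len(nbrs) for n, nbrs in adj.items()}
print(degree)  # node 1 is an intersection (degree 3)
```

As the text notes, the real difficulty is that this graph is rebuilt daily with drifting node positions, which is what breaks the assumptions of standard link prediction methods.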

Acknowledgments

We would like to thank Prof. Gunnar Spreen, who provided us with insights into ice lead detection and possible challenges connected to the project idea. Furthermore, we would like to thank Shenyang Huang and Asst. Prof. David Rolnick for their valuable feedback and support. J.K. was supported in part by the DeepMind scholarship, the Mitacs Globalink Graduate Fellowship, and the German Academic Scholarship Foundation.

References

[1] Jay P Hoffman, Steven A Ackerman, Yinghui Liu, and Jeffrey R Key. 2019. The detection and characterization of Arctic sea ice leads with satellite imagers. Remote Sensing 11, 5 (2019), 521.

How to cite: Kaltenborn, J., Ramesh, V., and Wright, T.: Ice Lead Network Analysis, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8945, https://doi.org/10.5194/egusphere-egu22-8945, 2022.

EGU22-9753 | Presentations | CR2.8

Using LSTM on surface data to reconstruct 3D Temperature & Salinity profiles in the Arctic Ocean 

Mathias Jensen, Casper Bang-Hansen, Ole Baltazar Andersen, Carsten Bjerre Ludwigsen, and Mads Ehrhorn

In recent years, the importance of Arctic Ocean dynamics for climate monitoring and modelling has become evident. The data used for building models often include temperature & salinity profiles. Such profiles in the Arctic region are sparse, and acquiring new data is expensive and time-consuming. Thus, efficient methods of interpolation are necessary to expand regional data. In this project, 3D temperature & salinity profiles are reconstructed using 2D surface measurements from ships, floats and satellites. The technique is based on a stacked Long Short-Term Memory (LSTM) neural network. The goal is to be able to reconstruct the profiles using remotely sensed data.
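For illustration, a single LSTM cell step (the building block of the stacked network) can be written in a few lines of numpy; the weights are random stand-ins and the inputs only mimic surface variables:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; gates stacked as [input, forget, cell, output]."""
    z = W @ x + U @ h + b
    n = len(h)
    i, f, g, o = z[:n], z[n:2*n], z[2*n:3*n], z[3*n:]
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    c_new = sig(f) * c + sig(i) * np.tanh(g)
    h_new = sig(o) * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(5)
n_in, n_hid = 3, 8                     # e.g. SST, surface salinity, sea level anomaly
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(12):                    # a year of monthly surface samples
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
print(h.shape)                         # hidden state summarising the sequence
```

In a profile-reconstruction setup, a final dense layer would map this hidden state to temperature and salinity values at the desired depth levels.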

How to cite: Jensen, M., Bang-Hansen, C., Andersen, O. B., Ludwigsen, C. B., and Ehrhorn, M.: Using LSTM on surface data to reconstruct 3D Temperature & Salinity profiles in the Arctic Ocean, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9753, https://doi.org/10.5194/egusphere-egu22-9753, 2022.

EGU22-10386 | Presentations | CR2.8

Arctic sea ice dynamics forecasting through interpretable machine learning 

Matteo Sangiorgio, Elena Bianco, Doroteaciro Iovino, Stefano Materia, and Andrea Castelletti

Machine Learning (ML) has become an increasingly popular tool to model the evolution of sea ice in the Arctic region. ML tools produce highly accurate and computationally efficient forecasts on specific tasks. Yet, they generally lack physical interpretability and do not support the understanding of system dynamics and interdependencies among target variables and driving factors.

Here, we present a 2-step framework to model Arctic sea ice dynamics with the aim of balancing the high performance and accuracy typical of ML with result interpretability. We first use time series clustering to obtain homogeneous subregions of sea ice spatiotemporal variability. Then, we run an advanced feature selection algorithm, called Wrapper for Quasi Equally Informative Subset Selection (W-QEISS), to process the barycentric sea ice time series of each cluster. W-QEISS identifies neural predictors (i.e., extreme learning machines) of the future evolution of the sea ice based on past values and returns the most relevant set of input variables to describe such evolution.
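An extreme learning machine, the class of neural predictor wrapped by W-QEISS, can be sketched as a random fixed hidden layer whose output weights are fitted by least squares; the data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 4))                 # candidate drivers (lagged values)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]           # synthetic "future thickness" target

W_h = rng.normal(size=(4, 50))                # random, fixed input-to-hidden weights
H = np.tanh(X @ W_h)                          # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights in closed form

pred = H @ beta
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(round(rmse, 3))
```

Because training reduces to one least-squares solve, the wrapper can afford to refit the predictor for many candidate input subsets, which is what makes the W-QEISS search tractable.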

Monthly output from the Pan-Arctic Ice-Ocean Modeling and Assimilation System (PIOMAS) from 1978 to 2020 is used for the entire Arctic region. Sea ice thickness represents the target of our analysis, while sea ice concentration, snow depth, sea surface temperature and salinity are considered as candidate drivers.

Results show that autoregressive terms have a key role in the short term (with lag times of 1 and 2 months) as well as in the long term (i.e., in the previous year); salinity along the Siberian coast is frequently selected as a key driver, especially with a one-year lag; the effect of sea surface temperature is stronger in the clusters with thinner ice; and snow depth is relevant only in the short term.

The proposed framework is an efficient support tool to better understand the physical process driving the evolution of sea ice in the Arctic region.

How to cite: Sangiorgio, M., Bianco, E., Iovino, D., Materia, S., and Castelletti, A.: Arctic sea ice dynamics forecasting through interpretable machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10386, https://doi.org/10.5194/egusphere-egu22-10386, 2022.

EGU22-10637 | Presentations | CR2.8

A deep learning approach for mapping and monitoring glacial lakes from space 

Manu Tom, Holger Frey, and Daniel Odermatt

Climate change intensifies glacier melt, which leads to the formation of numerous new glacial lakes in the overdeepenings of former glacier beds. Additionally, the area of many existing glacial lakes is increasing. More than one thousand glacial lakes have emerged in Switzerland since the Little Ice Age, and hundreds of lakes are expected to form in the 21st century. Rapid deglaciation and the formation of new lakes severely affect downstream ecosystem services, hydropower production and high-alpine hazard situations. Glacial lake inventories for high-alpine terrain are increasingly becoming available to the research community. However, high-frequency mapping and monitoring of these lakes is necessary to assess hazards and to estimate Glacial Lake Outburst Flood (GLOF) risks, especially for lakes with high seasonal variations. One way to achieve this goal is to leverage the possibilities of satellite-based remote sensing, using optical and Synthetic Aperture Radar (SAR) satellite sensors and deep learning.

There are several challenges to be tackled. Mapping glacial lakes using satellite sensors is difficult, due to the very small area of a great majority of these lakes. The inability of the optical sensors (e.g. Sentinel-2) to sense through clouds creates another bottleneck. Further challenges include cast and cloud shadows, and increased levels of lake and atmospheric turbidity. Radar sensors (e.g. Sentinel-1 SAR) are unaffected by cloud obstruction. However, handling cast shadows and natural backscattering variations from water surfaces are hurdles in SAR-based monitoring. Due to these sensor-specific limitations, optical sensors provide generally less ambiguous but temporally irregular information, while SAR data provides lower classification accuracy but without cloud gaps.

We propose a deep learning-based SAR-optical satellite data fusion pipeline that merges the complementary information from both sensors. We use Sentinel-1 SAR and Sentinel-2 L2A imagery as input to a deep network with a Convolutional Neural Network (CNN) backbone. The proposed pipeline performs a fusion of information from the two input branches that feed heterogeneous satellite data. A shared block learns embeddings (feature representations) invariant to the input satellite type, which are then fused to guide the identification of glacial lakes. Our ultimate aim is to produce geolocated maps of the target regions in which the proposed bottom-up, data-driven methodology classifies each pixel as either lake or background.
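A minimal sketch of the two-branch fusion idea (not the authors' architecture): per-pixel encoders map SAR and optical channels into a shared embedding space, and the fused embedding drives a lake/background score. Channel counts and weights are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
sar = rng.normal(size=(2, 16, 16))       # Sentinel-1 stand-in: VV, VH
optical = rng.normal(size=(4, 16, 16))   # Sentinel-2 stand-in: 4 bands

def encode(x, w):
    """1x1-convolution-style encoder: per-pixel linear map + ReLU."""
    z = np.tensordot(w, x, axes=([1], [0]))  # (emb, H, W)
    return np.maximum(z, 0)

emb = 8
z_sar = encode(sar, rng.normal(scale=0.3, size=(emb, 2)))
z_opt = encode(optical, rng.normal(scale=0.3, size=(emb, 4)))
fused = 0.5 * (z_sar + z_opt)            # simple average fusion in the shared space

w_out = rng.normal(size=emb)
score = np.tensordot(w_out, fused, axes=1)  # per-pixel lake score
mask = score > 0                             # lake / background decision
print(fused.shape, mask.shape)
```

Averaging in a shared embedding space (rather than concatenating raw bands) is one way to keep the classifier usable when only one sensor is available, e.g. under cloud cover.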

This work is part of two major projects: ESA AlpGlacier project that targets mapping and monitoring of the glacial lakes in the Swiss (and European) Alps, and the UNESCO (Adaptation Fund) GLOFCA project that aims to reduce the vulnerabilities of populations in the Central Asian countries (Kazakhstan, Tajikistan, Uzbekistan, and Kyrgyzstan) from GLOFs in a changing climate. As part of the GLOFCA project, we are developing a python-based analytical toolbox for the local authorities, which incorporates the proposed deep learning-based pipeline for mapping and monitoring the glacial lakes in the target regions in Central Asia.

How to cite: Tom, M., Frey, H., and Odermatt, D.: A deep learning approach for mapping and monitoring glacial lakes from space, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10637, https://doi.org/10.5194/egusphere-egu22-10637, 2022.

EGU22-12785 | Presentations | CR2.8

Machine learning tools for pattern recognition in polar climate science 

William Gregory

Over the past four decades, the inexorable growth in technology and subsequently the availability of Earth-observation and model data has been unprecedented. Hidden within these data are the fingerprints of the physical processes that govern climate variability over a wide range of spatial and temporal scales, and it is the task of the climate scientist to separate these patterns from noise. Given the wealth of data now at our disposal, machine learning methods are becoming the tools of choice in climate science for a variety of applications ranging from data assimilation, to sea ice feature detection from space. This talk summarises recent developments in the application of machine learning methods to the study of polar climate, with particular focus on Arctic sea ice. Supervised learning techniques including Gaussian process regression, and unsupervised learning techniques including cluster analysis and complex networks, are applied to various problems facing the polar climate community at present, where each application can be considered an individual component of the larger sea ice prediction problem. These applications include: seasonal sea ice forecasting, improving spatio-temporal data coverage in the presence of sparse satellite observations, and illuminating the spatio-temporal connectivity between climatological processes.

How to cite: Gregory, W.: Machine learning tools for pattern recognition in polar climate science, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12785, https://doi.org/10.5194/egusphere-egu22-12785, 2022.

EGU22-12882 | Presentations | CR2.8

Inverse modelling techniques for snow and ice thickness retrievals from satellite altimetry  

Joel Perez Ferrer, Michel Tsamados, Matthew Fox, Tudor Suciu, Harry Heorton, and Carmen Nab

We have recently applied an objective-mapping-type approach to merge observations from multiple altimeters, both for enhancing the temporal/spatial resolution of freeboard samples and for analyzing crossovers between satellites (Gregory et al., 2021). This mapping provides optimal interpolation of observations proximal to a location in space and time, based on the covariance of the observations and an a priori understanding of their spatiotemporal correlation length scales. It offers a best linear estimator and error field for the observation (radar freeboard or snow depth), which can be used to better constrain pan-Arctic uncertainties. 
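The objective-mapping estimator can be sketched in one dimension; the Gaussian correlation model, length scale and noise level below are illustrative assumptions:

```python
import numpy as np

def objective_map(x_obs, y_obs, x_new, length=1.0, noise=0.01):
    """Best linear estimate and error variance from covariance-weighted obs."""
    def cov(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)   # assumed Gaussian correlation
    K = cov(x_obs, x_obs) + noise * np.eye(len(x_obs))  # obs covariance + error
    k = cov(x_new, x_obs)
    weights = k @ np.linalg.inv(K)                # optimal interpolation weights
    estimate = weights @ y_obs
    err_var = 1.0 - np.sum(weights * k, axis=1)   # error field of the estimate
    return estimate, err_var

x_obs = np.array([0.0, 1.0, 2.0, 3.0])            # e.g. along-track freeboard samples
y_obs = np.sin(x_obs)
est, var = objective_map(x_obs, y_obs, np.array([1.5]))
print(float(est[0]), float(var[0]))
```

The error variance grows where observations are sparse relative to the correlation length, which is exactly the information used to constrain pan-Arctic uncertainties.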

 

In addition, we will explore here a newly developed inverse modelling framework to synchronously retrieve the snow depth and ice thickness from bias-corrected or calibrated radar freeboards from multiple satellite retrievals. The radar equations can be rearranged to formulate a joint forward model at gridded level relating measured radar freeboards from multiple satellites (and airborne data) to the underlying snow and ice thickness. In doing so we have also introduced a penetration factor correction term for OIB radar freeboard measurements. To solve this inverse problem for snow depth and ice thickness we use the following two methodologies inspired by Earth science applications (i.e., seismology):

 

Space ‘uncorrelated’ inverse modelling. The method is called ‘space uncorrelated’ inverse modelling because the algorithm is applied locally, for small distinct regions in the Arctic Ocean, multiple times, until the entire Arctic Ocean is covered. To sample the parameter space we use the publicly available Neighbourhood Algorithm (NA), developed originally for seismic tomography of the Earth’s interior and recently applied by us to a sea ice dynamics inversion problem (Heorton et al., 2019).

 

Space ‘correlated’ inverse modelling. For the second method of inverse modelling, we used what we call a ‘space correlated’ approach. Here the main algorithm is applied over the entire Arctic region, aiming to retrieve the desired parameters at once. In contrast with the previous approach, in this method we take into account positional correlations of the physical parameters when solving the inverse problem, the output being a map of the Arctic composed of a dynamically generated tiling of Voronoi cells. In this way, regions with less accurate observations are more coarsely resolved, while highly sampled regions are provided on a finer grid with a smaller uncertainty. The main algorithm used here to calculate the posterior solution is called ‘reversible jump Markov chain Monte Carlo’ (hereafter rj-MCMC) and was designed by Peter Green (Green, 1995). Bodin and Sambridge (2009) adapted this algorithm for seismic inversion, which is the basis of the algorithm used in this study.
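One standard ingredient of such a forward model is hydrostatic equilibrium, relating ice freeboard and snow load to ice thickness; the densities below are typical literature values, not the study's calibrated ones:

```python
import numpy as np

rho_w, rho_i, rho_s = 1024.0, 917.0, 300.0   # water, ice, snow densities (kg m^-3)

def ice_thickness(freeboard_ice, snow_depth):
    """Ice thickness from ice freeboard and snow load via hydrostatic balance."""
    return (rho_w * freeboard_ice + rho_s * snow_depth) / (rho_w - rho_i)

thickness = ice_thickness(np.array([0.15]), np.array([0.25]))
print(round(float(thickness[0]), 2))  # roughly a 2 m thick floe
```

Inverting this relation jointly for snow depth and ice thickness from several freeboard measurements is what makes the problem under-determined and motivates the sampling-based approaches above.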

 

How to cite: Perez Ferrer, J., Tsamados, M., Fox, M., Suciu, T., Heorton, H., and Nab, C.: Inverse modelling techniques for snow and ice thickness retrievals from satellite altimetry , EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12882, https://doi.org/10.5194/egusphere-egu22-12882, 2022.

EGU22-91 | Presentations | NP4.1

The role of teleconnections in complex climate network 

Ruby Saha

A complex network provides a robust framework to statistically investigate the topology of local and long-range connections, i.e., teleconnections, in climate dynamics. The climate network is constructed from a meteorological data set using the linear Pearson correlation coefficient to measure the similarity between two regions. Long-range teleconnections connect remote geographical sites and are crucial for climate networks. In this study, we discuss how, during El Niño-Southern Oscillation onset, the teleconnection pattern changes according to the episode's strength. The long-range teleconnections are significant and responsible for the episode's extremum Oceanic Niño Index (ONI), attained gradually after onset. We quantify betweenness centrality and note that the teleconnection distribution pattern agrees well with the betweenness measurements.
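The construction step, pairwise Pearson correlation thresholded into links, can be sketched as follows; the threshold and time series are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(240)                        # 20 years of monthly anomalies
base = np.sin(2 * np.pi * t / 60)         # shared low-frequency signal
series = np.stack([
    base + 0.3 * rng.normal(size=t.size), # three teleconnected sites
    base + 0.3 * rng.normal(size=t.size),
    base + 0.3 * rng.normal(size=t.size),
    rng.normal(size=t.size),              # two unrelated sites
    rng.normal(size=t.size),
])

r = np.corrcoef(series)                   # Pearson similarity matrix
adjacency = (np.abs(r) > 0.5) & ~np.eye(len(r), dtype=bool)
print(adjacency.sum(axis=1))              # link counts: only correlated sites connect
```

Betweenness centrality is then computed on the resulting graph; nodes that bridge remote, strongly correlated regions score highly, which is how teleconnection hubs are identified.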

How to cite: Saha, R.: The role of teleconnections in complex climate network, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-91, https://doi.org/10.5194/egusphere-egu22-91, 2022.

EGU22-1831 | Presentations | NP4.1

Quantifying space-weather events using dynamical network analysis of Pc waves with global ground based magnetometers. 

Shahbaz Chaudhry, Sandra Chapman, Jesper Gjerloev, Ciaran Beggan, and Alan Thompson

Geomagnetic storms can impact technological systems, on the ground and in space, including damage to satellites and power blackouts. Their impact on ground systems such as power grids depends upon the spatio-temporal extent and time-evolution of the ground magnetic perturbation driven by the storm.

Pc waves are Alfvén wave resonances of closed magnetospheric field lines and are ubiquitous in the inner magnetosphere. They have been extensively studied, in particular since Pc wave power tracks the onset and evolution of geomagnetic storms. We study the spatial and temporal evolution of Pc waves with a network analysis of the 100+ ground-based magnetometer stations collated by the SuperMAG collaboration with a single time-base and calibration. 

Network-based analysis of 1 min cadence SuperMAG magnetometer data has been applied to the dynamics of substorm current systems (Dods et al. JGR 2015, Orr et al. GRL 2019) and the magnetospheric response to IMF turnings (Dods et al. JGR 2017). It has the potential to capture the full spatio-temporal response with a few time-dependent network parameters. Now, with the availability of 1 sec data across the entire SuperMAG network we are able for the first time to apply network analysis globally to resolve both the spatial and temporal correlation patterns of the ground signature of Pc wave activity as a geomagnetic storm evolves. We focus on Pc2 (5-10s period) and Pc3 (10-45s period) wave bands. We obtain the time-varying global Pc wave dynamical network over individual space weather events.

To construct the networks we sample each magnetometer time series with a moving window in the time domain (20 times the Pc period range) and then band-pass filter each station's time series to obtain Pc2 and Pc3 waveforms. We then compute the time-lagged cross-correlation (TLXC) between all stations for each Pc band. Modelling is used to determine a threshold of significant TLXC above which a pair of stations is connected in the network. The TLXC as a function of lag is tested against a criterion for sinusoidal waveforms and then used to calculate the phase difference. The connections with a TLXC peak at non-zero lag form a directed network which characterizes propagation or information flow. The connections with a TLXC peak close to zero lag form an undirected network which characterizes a response that is globally instantaneously coherent.
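The band-pass and cross-correlation steps above can be sketched with a synthetic two-station example. This is an illustrative reconstruction, not the authors' code: the filter order, significance threshold, and toy signals are assumptions made for the example:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def pc_bandpass(x, period_lo, period_hi, fs=1.0):
    """Band-pass a 1 Hz magnetometer trace to a Pc band given its period range (s)."""
    sos = butter(4, [1.0 / period_hi, 1.0 / period_lo], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def peak_xcorr(x, y, max_lag):
    """Normalized cross-correlation of two traces; returns (peak value, lag at peak)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.mean(x[max(0, -l):x.size - max(0, l)] *
                           y[max(0, l):y.size - max(0, -l)]) for l in lags])
    k = int(np.argmax(np.abs(cc)))
    return cc[k], int(lags[k])

rng = np.random.default_rng(1)
# Shared broadband Pc3-band activity, seen at station B three seconds after station A.
common = pc_bandpass(rng.standard_normal(1200), 10, 45)
common /= common.std()
sta_a = common + 0.3 * rng.standard_normal(1200)
sta_b = np.roll(common, 3) + 0.3 * rng.standard_normal(1200)

xa = pc_bandpass(sta_a, 10, 45)   # Pc3 band: 10-45 s periods
xb = pc_bandpass(sta_b, 10, 45)
peak, lag = peak_xcorr(xa, xb, max_lag=20)

# A significant peak at non-zero lag contributes a *directed* edge (propagation);
# a peak at ~zero lag an *undirected* edge (globally coherent response).
connected = abs(peak) > 0.5
```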

We apply this network analysis to isolated geomagnetic storms. We find that the network connectivity does not simply track Pc wave power; it therefore contains additional information. Geographically short-range connections are prevalent at all times; the storm onset marks a transition to a network which has both an enhancement of geographically short-range connections and the growth of geographically long-range, global-scale connections extending spatially over a region exceeding 9 h MLT. These global-scale connections, indicating a globally coherent Pc wave response, are prevalent throughout the storm with considerable variation (within a few time windows). The stations are not uniformly distributed spatially; we therefore distinguish long-range connections to avoid introducing spurious spatial correlation.

How to cite: Chaudhry, S., Chapman, S., Gjerloev, J., Beggan, C., and Thompson, A.: Quantifying space-weather events using dynamical network analysis of Pc waves with global ground based magnetometers., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1831, https://doi.org/10.5194/egusphere-egu22-1831, 2022.

EGU22-2014 | Presentations | NP4.1

OBS noise reduction using music information retrieval algorithms 

Zahra Zali, Theresa Rein, Frank Krüger, Matthias Ohrnberger, and Frank Scherbaum

Since the ocean covers 71% of the Earth’s surface, records from ocean bottom seismometers (OBS) are essential for investigating the whole Earth’s structure. However, data from ocean bottom recordings are commonly difficult to analyze due to the high noise level, especially on the horizontal components. In addition, signals of seismological interest, such as earthquake recordings at teleseismic distances, are masked by oceanic noise. Therefore, noise reduction of OBS data is an important task required for the analysis of OBS records. Different approaches have been suggested in previous studies to successfully remove noise from vertical components; however, noise reduction on horizontal-component records has remained problematic. Here we introduce a method based on the harmonic-percussive separation (HPS) algorithms used in Zali et al. (2021), which is able to separate long-lasting narrowband signals from broadband transients in the OBS records. In the context of OBS noise reduction using HPS algorithms, percussive components correspond to earthquake signals and harmonic components correspond to noise signals. OBS noise with narrowband, horizontal structures in the short time Fourier transform (STFT) spectrogram is readily distinguishable from transient, short-duration seismic events, which appear as vertical structures in the STFT spectrogram. Through HPS algorithms we separate horizontal structures from vertical structures in the STFT spectrograms. Using this method we can reduce OBS noise on both vertical and horizontal components, retrieve clearer broadband earthquake waveforms and increase the earthquake signal-to-noise ratio. The applicability of the method is checked through tests on synthetic and real data.
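The harmonic-percussive separation idea (horizontal vs. vertical structures in the STFT) can be sketched with the classic median-filtering variant of HPS on a toy trace. This is an illustrative sketch, not the authors' implementation; the kernel sizes and the synthetic signal are assumptions for the example:

```python
import numpy as np
from scipy.signal import stft, istft, medfilt2d

rng = np.random.default_rng(2)
fs = 100.0
t = np.arange(0, 60, 1 / fs)

# Toy OBS-like trace: a long-lasting narrowband hum ("harmonic" noise)
# plus a short broadband transient (the "percussive" earthquake-like signal).
hum = 0.8 * np.sin(2 * np.pi * 3.0 * t)
transient = np.zeros_like(t)
transient[3000:3060] = 2.0 * rng.standard_normal(60)
x = hum + transient

f, frames, Z = stft(x, fs=fs, nperseg=256)
S = np.abs(Z)

# Median filtering along time smears out horizontal (narrowband) structures;
# median filtering along frequency smears out vertical (transient) structures.
H = medfilt2d(S, kernel_size=[1, 31])   # harmonic-enhanced magnitude
P = medfilt2d(S, kernel_size=[31, 1])   # percussive-enhanced magnitude

# A soft mask splits the STFT; the percussive part approximates the earthquake.
mask_p = P**2 / (H**2 + P**2 + 1e-12)
_, x_perc = istft(Z * mask_p, fs=fs, nperseg=256)
```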

How to cite: Zali, Z., Rein, T., Krüger, F., Ohrnberger, M., and Scherbaum, F.: OBS noise reduction using music information retrieval algorithms, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2014, https://doi.org/10.5194/egusphere-egu22-2014, 2022.

EGU22-2097 | Presentations | NP4.1 | Highlight

Medium- to long-term forecast of sea surface temperature using EEMD-STEOF-LSTM hybrid model 

Rixu Hao, Yuxin Zhao, Xiong Deng, Di Zhou, Dequan Yang, and Xin Jiang

Sea surface temperature (SST) is a vitally important variable of the global ocean, which profoundly affects the climate and marine ecosystems. The field of forecasting oceanic variables has traditionally relied on numerical models, which solve discretized dynamical and physical oceanic equations. However, numerical models suffer from limitations such as short timeliness, complex physical processes, and excessive computational cost. Machine learning has been shown to capture spatial and temporal information without these limitations, but previous research on multi-scale feature extraction and evolutionary forecasting under spatiotemporal integration is still inadequate. To fill this gap, a multi-scale spatiotemporal forecast model is developed combining ensemble empirical mode decomposition (EEMD) and spatiotemporal empirical orthogonal functions (STEOF) with long short-term memory (LSTM), referred to as EEMD-STEOF-LSTM. Specifically, EEMD is applied for adaptive multi-scale analysis; STEOF decomposes the spatiotemporal processes of different scales into a sum of products of spatiotemporal basis functions and corresponding coefficients, which captures the evolution of spatial and temporal processes simultaneously; and LSTM is employed to achieve medium- to long-term forecasts of the STEOF-derived spatiotemporal coefficients. A case study of daily mean SST in the South China Sea shows that the proposed hybrid EEMD-STEOF-LSTM model consistently outperforms the optimal climatic normal (OCN), STEOF, and STEOF-LSTM, and can accurately forecast the characteristics of oceanic eddies. Statistical analysis of the case study demonstrates that the model has great potential for practical applications in medium- to long-term forecasting of oceanic variables.
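The STEOF step (decomposing spatiotemporal evolution into joint space-time basis functions with per-sample coefficients) can be illustrated with an SVD on synthetic data, treating each year's whole time-space evolution as one sample. This is a conceptual sketch with invented toy data, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(11)
n_years, n_days, n_space = 18, 90, 25

# Toy SST anomalies: one coherent propagating spatiotemporal mode with a
# year-to-year varying amplitude, plus observational noise.
days = np.arange(n_days)
space = np.arange(n_space)
mode = np.sin(2 * np.pi * (days[:, None] / 45.0 - space[None, :] / 25.0))
field = np.stack([(1 + 0.5 * rng.standard_normal()) * mode
                  + 0.3 * rng.standard_normal((n_days, n_space))
                  for _ in range(n_years)])

# STEOF idea: flatten each year's (time x space) evolution into one vector and
# take EOFs of that space-time vector -> spatiotemporal basis functions.
samples = field.reshape(n_years, n_days * n_space)
samples = samples - samples.mean(axis=0)
U, s, Vt = np.linalg.svd(samples, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The leading basis function captures the propagating pattern jointly in space
# and time; its coefficients (one per sample/year) are what an LSTM would forecast.
steof1 = Vt[0].reshape(n_days, n_space)
```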

How to cite: Hao, R., Zhao, Y., Deng, X., Zhou, D., Yang, D., and Jiang, X.: Medium- to long-term forecast of sea surface temperature using EEMD-STEOF-LSTM hybrid model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2097, https://doi.org/10.5194/egusphere-egu22-2097, 2022.

EGU22-2560 | Presentations | NP4.1

The IMFogram: a new time-frequency representation algorithm for nonstationary signals 

Antonio Cicone

In this presentation, we introduce the IMFogram method (pronounced like "infogram"), which is a new, fast, local, and reliable time-frequency representation (TFR) method for nonstationary signals. This technique is based on the Intrinsic Mode Function (IMF) decomposition produced by a decomposition method, such as the Empirical Mode Decomposition-based techniques, Iterative Filtering-based algorithms, or any equivalent method developed so far. We present the mathematical properties of the IMFogram and prove that this method is a generalization of the spectrogram. We conclude the presentation with some applications, as well as a comparison of its performance with other existing TFR techniques.

How to cite: Cicone, A.: The IMFogram: a new time-frequency representation algorithm for nonstationary signals, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2560, https://doi.org/10.5194/egusphere-egu22-2560, 2022.

EGU22-2922 | Presentations | NP4.1

Constraining the uncertainty in CO2 seasonal cycle metrics by residual bootstrapping. 

Theertha Kariyathan, Wouter Peters, Julia Marshall, Ana Bastos, and Markus Reichstein

The analysis of long, high-quality time series of atmospheric greenhouse gas measurements helps to quantify their seasonal to interannual variations and their impact on global climate. These discrete measurement records contain, however, gaps and at times noisy data, influenced by local fluxes or synoptic-scale events; hence appropriate filtering and curve-fitting techniques are often used to smooth and gap-fill the atmospheric time series. Previous studies have shown that there is an inherent uncertainty associated with curve fitting, which introduces biases depending on the choice of mathematical method used for data processing and can lead to scientific misinterpretation of the signal. Furthermore, the uncertainties in curve fitting can be propagated onto the metrics estimated from the fitted curve, which can significantly influence the quantification of these metrics and their interpretation. In this context, we present a novel methodology for constraining the uncertainty arising from fitting a smooth curve to the CO2 dry-air mole fraction time series, and propagate this uncertainty onto commonly used metrics of the CO2 seasonal cycle. We generate an ensemble of fitted curves from the data using residual bootstrap sampling with loess-fitted residuals, which is representative of the inherent uncertainty in applying the curve-fitting method to the discrete data. The spread of the selected CO2 seasonal cycle metrics across the bootstrap time series provides an estimate of the inherent uncertainty in curve fitting. Furthermore, we show that the approach can be extended to other curve-fitting methods by generating multiple bootstrap samples from residuals obtained with the CCGCRV filtering method, which is widely used in the atmospheric greenhouse gas measurement community.
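The residual-bootstrap procedure above can be sketched on a toy CO2-like series. This is an illustrative reconstruction, not the authors' code: a Savitzky-Golay filter stands in for the loess/CCGCRV curve fit, and the seasonal-amplitude metric, window length, and toy data are assumptions for the example:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(3)

# Toy weekly CO2-like record: trend + seasonal cycle + measurement noise.
t = np.arange(0, 10, 1 / 52.0)
y = 380 + 2.2 * t + 3.0 * np.sin(2 * np.pi * t) + 0.4 * rng.standard_normal(t.size)

def smooth(v):
    """Stand-in for the loess / CCGCRV curve fit used in the abstract."""
    return savgol_filter(v, window_length=27, polyorder=3)

def seasonal_amplitude(curve, per=52):
    """Mean peak-to-trough amplitude per year of the detrended fitted curve."""
    idx = np.arange(curve.size)
    detr = curve - np.polyval(np.polyfit(idx, curve, 1), idx)
    years = detr[:detr.size // per * per].reshape(-1, per)
    return float(np.mean(years.max(axis=1) - years.min(axis=1)))

fit = smooth(y)
resid = y - fit

# Residual bootstrap: refit on "fit + resampled residuals", collect the metric.
amps = np.array([seasonal_amplitude(smooth(fit + rng.choice(resid, resid.size)))
                 for _ in range(200)])

# The ensemble spread estimates the curve-fitting uncertainty propagated onto
# the seasonal cycle metric.
amp_mean, amp_sd = amps.mean(), amps.std()
```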

How to cite: Kariyathan, T., Peters, W., Marshall, J., Bastos, A., and Reichstein, M.: Constraining the uncertainty in CO2 seasonal cycle metrics by residual bootstrapping., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2922, https://doi.org/10.5194/egusphere-egu22-2922, 2022.

EGU22-4795 | Presentations | NP4.1

Robust Causal Inference for Irregularly Sampled Time Series: Applications in Climate and Paleoclimate Data Analysis 

Aditi Kathpalia, Pouya Manshour, and Milan Paluš

Predicting climate and determining its major drivers has become even more important now that climate change poses a great challenge to humankind and our planet Earth. Different studies employ correlation, causality methods or modelling approaches to study the interaction between climate and climate forcing variables (anthropogenic or natural). This includes the study of the interaction between global surface temperatures and CO2, and between rainfall in different locations and the El Niño–Southern Oscillation (ENSO) phenomenon. The results produced by different studies have been found to be different and debatable, presenting an ambiguous situation. In this work, we develop and apply a novel robust causality estimation technique for time-series data (to estimate causal influence between given observables) that can help to resolve the ambiguity. The discrepancy in existing results arises due to challenges with the acquired data and limitations of the causal inference/modelling approaches. Our novel approach combines the use of a recently proposed causality method, Compression-Complexity Causality (CCC) [1], and Ordinal/Permutation pattern-based coding [2]. CCC estimates have been shown to be robust for bivariate systems with low temporal resolution, missing samples, long-term memory and finite length data [1]. The use of ordinal patterns helps to extend bivariate CCC to the multivariate case by capturing the multidimensional dynamics of the given variables’ systems in the symbolic temporal sequence of a single variable. This methodology is tested on dynamical systems data which are short in length and have been corrupted with missing samples or subsampled to different levels. The superior performance of ‘Permutation CCC’ on such data relative to other causality estimation methods strengthens our trust in the method.
We apply the method to study the interactions between CO2 and temperature recordings on three different time scales, CH4 and temperature on the paleoclimate scale, ENSO and the South Asian monsoon on monthly and yearly time scales, and the North Atlantic Oscillation and surface temperature on daily and monthly time scales. These datasets are either short in length, sampled irregularly, have missing samples, or exhibit a combination of these factors. Our results validate some existing studies while contradicting others. In addition, the development of the novel permutation-CCC approach opens the possibility of its application for making useful inferences from other challenging climate datasets.
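The ordinal (Bandt-Pompe) coding step that underlies the permutation extension of CCC can be sketched directly; CCC itself is described in reference [1] and is not reproduced here. This is an illustrative sketch using one common argsort convention, with invented toy inputs:

```python
import numpy as np
from itertools import permutations

def ordinal_symbols(x, dim=3, delay=1):
    """Map a time series to Bandt-Pompe ordinal pattern symbols (0 .. dim!-1)."""
    patterns = {p: i for i, p in enumerate(permutations(range(dim)))}
    n = len(x) - (dim - 1) * delay
    symbols = np.empty(n, dtype=int)
    for t in range(n):
        window = x[t:t + dim * delay:delay]
        symbols[t] = patterns[tuple(np.argsort(window))]
    return symbols

# A monotonically increasing series yields a single ordinal pattern ...
up = ordinal_symbols(np.arange(50.0))
# ... while white noise visits all dim! = 6 patterns.
rng = np.random.default_rng(4)
noisy = ordinal_symbols(rng.standard_normal(5000))
```

The resulting symbolic sequence compresses the multidimensional local dynamics into a single discrete series, which is what makes the multivariate extension of a bivariate measure tractable.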


This study is supported by the Czech Science Foundation, Project No. GA19-16066S and by the Czech Academy of Sciences, Praemium Academiae awarded to M. Paluš.


References:
[1] Kathpalia, A., & Nagaraj, N. (2019). Data-based intervention approach for Complexity-Causality measure. PeerJ Computer Science, 5, e196.
[2] Bandt, C., & Pompe, B. (2002). Permutation entropy: a natural complexity measure for time series. Physical review letters, 88(17), 174102.

How to cite: Kathpalia, A., Manshour, P., and Paluš, M.: Robust Causal Inference for Irregularly Sampled Time Series: Applications in Climate and Paleoclimate Data Analysis, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4795, https://doi.org/10.5194/egusphere-egu22-4795, 2022.

EGU22-6014 | Presentations | NP4.1

Combining variational mode decomposition and recurrent neural network to predict rainfall time series and evaluating prediction performance by universal multifractals 

H. Zhou, D. Schertzer, and I. Tchiguirinskaia

Rainfall time series prediction is crucial for geoscientific system monitoring, but it is challenging and complex due to the extreme variability of rainfall. In order to improve prediction accuracy, a hybrid deep learning model (VMD-RNN) was proposed. In this study, variational mode decomposition (VMD) is first applied to decompose the original rainfall time series into several sub-sequences according to the frequency domain. Following that, different recurrent neural network (RNN) models are utilized to predict individual sub-sequences, and the final prediction is reconstructed by summing the prediction results of the sub-sequences. These RNN models are long short-term memory (LSTM), gated recurrent unit (GRU), bidirectional LSTM (BiLSTM) and bidirectional GRU (BiGRU), which are well suited to sequence prediction. The root mean square error (RMSE) of the prediction performance is then used to select the ideal RNN model for each sub-sequence. In addition to RMSE, the framework of universal multifractals (UM) is also introduced to evaluate prediction performance, which enables characterization of the extreme variability of the predicted rainfall time series. The study employed two rainfall datasets from 2001 to 2020 in Paris, with daily and hourly resolutions. The results show that, when compared to directly predicting the original time series, the proposed hybrid VMD-RNN model improves prediction of high or extreme values for the daily dataset, but does not significantly enhance the prediction of zero or low values. Additionally, the VMD-RNN model also outperforms existing deep learning models without decomposition on the hourly dataset when evaluated with RMSE, while universal multifractal analyses point out limitations.
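The decompose-predict-recombine skeleton above can be sketched generically: any decomposition whose components sum to the original series fits it. This is an illustrative sketch, not the authors' model — a simple two-scale moving-average split stands in for VMD, and a least-squares AR forecaster stands in for the RNNs:

```python
import numpy as np

def two_scale_decompose(x, win=24):
    """Stand-in for VMD: a slow moving-average component plus the residual.
    The components sum exactly to the original series, as VMD modes should."""
    slow = np.convolve(x, np.ones(win) / win, mode="same")
    return [slow, x - slow]

def ar_forecast(x, order, n_ahead):
    """Least-squares AR(order) fit, iterated n_ahead steps (RNN stand-in)."""
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    coef, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    hist = list(x[-order:])
    out = []
    for _ in range(n_ahead):
        nxt = float(np.dot(coef, hist[-order:]))
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

rng = np.random.default_rng(5)
t = np.arange(600)
series = np.sin(2 * np.pi * t / 48.0) + 0.2 * rng.standard_normal(t.size)
train, test = series[:550], series[550:]

# Predict each sub-sequence separately, then sum the component forecasts.
pred = sum(ar_forecast(c, order=4, n_ahead=50) for c in two_scale_decompose(train))
rmse = float(np.sqrt(np.mean((pred - test) ** 2)))
rmse_direct = float(np.sqrt(np.mean((ar_forecast(train, 4, 50) - test) ** 2)))
```

In the full model each sub-sequence would get whichever RNN variant (LSTM, GRU, BiLSTM, BiGRU) minimizes its validation RMSE.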

How to cite: Zhou, H., Schertzer, D., and Tchiguirinskaia, I.: Combining variational mode decomposition and recurrent neural network to predict rainfall time series and evaluating prediction performance by universal multifractals, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6014, https://doi.org/10.5194/egusphere-egu22-6014, 2022.

EGU22-6281 | Presentations | NP4.1

Application of information theoretical measures for improved machine learning modelling of the outer radiation belt 

Constantinos Papadimitriou, Georgios Balasis, Ioannis A. Daglis, and Simon Wing

In the past ten years, Artificial Neural Networks (ANNs) and other machine learning methods have been used in a wide range of models and predictive systems to capture and even predict the onset and evolution of various types of phenomena. These applications typically require large datasets, composed of many variables and parameters, the number of which can often make the analysis cumbersome and prohibitively time-consuming, especially when the interplay of all these parameters is taken into consideration. Fortunately, information-theoretical measures can be used not only to reduce the dimensionality of the input space of such a system, but also to improve its efficiency. In this work, we present such a case, where differential electron fluxes from the Magnetic Electron Ion Spectrometer (MagEIS) on board the Van Allen Probes satellites are modelled by a simple ANN, using solar wind parameters and geomagnetic activity indices as inputs, and illustrate how the proper use of information theory measures can improve the efficiency of the model by minimizing the number of input parameters and shifting them in time to their proper time-lagged versions.
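The lag-selection idea can be sketched with a histogram estimate of mutual information between a driver and a lagged response. This is a generic illustration with synthetic stand-in data, not the authors' analysis; the bin count and toy coupling are assumptions:

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram estimate of the mutual information (in bits) of two series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(6)
n = 5000
driver = rng.standard_normal(n)                # e.g. a solar wind parameter
lag_true = 12
response = np.roll(driver, lag_true) + 0.5 * rng.standard_normal(n)  # e.g. flux

# Scan candidate lags; the MI peak identifies the physically meaningful delay,
# i.e. the "proper time-lagged version" of the input parameter.
lags = np.arange(0, 30)
mi = [mutual_info(driver[:n - 29], response[l:n - 29 + l]) for l in lags]
best_lag = int(lags[int(np.argmax(mi))])
```

Inputs whose MI with the target stays near the estimator's noise floor at every lag are candidates for removal, which is the dimensionality-reduction step described above.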

How to cite: Papadimitriou, C., Balasis, G., Daglis, I. A., and Wing, S.: Application of information theoretical measures for improved machine learning modelling of the outer radiation belt, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6281, https://doi.org/10.5194/egusphere-egu22-6281, 2022.

EGU22-7256 | Presentations | NP4.1

Identifying patterns of teleconnections, a curvature-based network analysis 

Jakob Schlör, Felix M. Strnad, Christian Fröhlich, and Bedartha Goswami

Representing spatio-temporal climate variables as complex networks allows uncovering nontrivial structure in the data. Although various tools for detecting communities in climate networks have been used to group nodes (spatial locations) with similar climatic conditions, we are often interested in identifying important links between communities. Of particular interest are methods to detect teleconnections, i.e., links over large spatial distances mediated by atmospheric processes.

We propose to use a recently developed network measure based on Ricci curvature to visualize teleconnections in climate networks. Ricci curvature allows distinguishing between-community and within-community links in networks. Applied to networks constructed from surface temperature anomalies, we show that Ricci curvature separates spatial scales. We use Ricci curvature to study differences in the global teleconnection patterns of different types of El Niño events, namely the Eastern Pacific (EP) and Central Pacific (CP) types. Our method reveals a global picture of teleconnection patterns, showing confinement of teleconnections to the tropics under EP conditions but teleconnections to the tropics and both the Northern and Southern Hemispheres under CP conditions. The obtained teleconnections corroborate previously reported impacts of EP and CP events.
Our results suggest that Ricci-curvature is a promising visual-analytics-tool to study the topology of climate systems with potential applications across observational and model data.
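The way curvature separates within-community from between-community links can be illustrated with the simple combinatorial (augmented Forman-Ricci) variant on a toy two-community graph. This is a sketch of the general principle only — the abstract's measure is a different Ricci-curvature construction, and the graph here is invented:

```python
import networkx as nx

def augmented_forman_curvature(G, u, v):
    """Augmented Forman-Ricci curvature of edge (u, v) in an unweighted graph:
    4 - deg(u) - deg(v) + 3 * (#triangles through the edge)."""
    tri = len(set(G[u]) & set(G[v]))
    return 4 - G.degree(u) - G.degree(v) + 3 * tri

# Two dense communities joined by a single sparse "teleconnection" edge.
G = nx.disjoint_union(nx.complete_graph(8), nx.complete_graph(8))
G.add_edge(0, 8)  # the lone between-community link

curv = {e: augmented_forman_curvature(G, *e) for e in G.edges()}

# Within-community edges sit in many triangles -> high curvature; the bridge
# between communities sits in none -> strongly negative curvature.
bridge = min(curv, key=curv.get)
```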

How to cite: Schlör, J., Strnad, F. M., Fröhlich, C., and Goswami, B.: Identifying patterns of teleconnections, a curvature-based network analysis, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7256, https://doi.org/10.5194/egusphere-egu22-7256, 2022.

EGU22-8399 | Presentations | NP4.1

Using neural networks to detect coastal hydrodynamic phenomena in high-resolution tide gauge data 

Felix Soltau, Sebastian Niehüser, and Jürgen Jensen

Tide gauges are exposed to various kinds of influences that can affect water level measurements significantly and lead to time series containing different phenomena and artefacts. These influences can be natural or anthropogenic, and both lead to actual changes of the water level. In contrast, technical malfunction of measuring devices, as another kind of influence, causes non-physical water level data. Both actual and non-physical data need to be detected and classified consistently, and possibly corrected, to enable the supply of adequate water level information. However, no automatic detection algorithm exists yet. Only obvious or frequent technical malfunctions like gaps can be detected automatically, but they have to be corrected manually by trained staff. Consequently, there is no consistently defined data pre-processing before, for example, statistical analyses are performed or water level information for navigation is passed on.

In the research project DePArT*, we focus on detecting natural phenomena like standing waves, meteotsunamis, or inland flood events as well as anthropogenic artefacts like operating storm surge barriers and sluices in water level time series with one-minute resolution. To this end, we train artificial neural networks (ANNs) using water level sequences of phenomena and artefacts as well as redundant data to recognize them in other data sets. We use convolutional neural networks (CNNs), as they have already been applied successfully in, for example, object detection and speech and language processing (Gu et al., 2018). However, CNNs need to be trained with large numbers of sample sequences. Hence, as a next step, the idea is to synthesize rarely observed phenomena and artefacts to gain enough training data. The trained CNNs can then be used to detect unnoticed phenomena and artefacts in past and recent time series. Depending on sequence characteristics and the results of synthesizing, we may be able to detect certain events as they occur and therefore provide pre-checked water level information in real time.
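The mechanism a 1-D CNN uses to flag such sequences — convolving learned kernels over the record and pooling the activations — can be sketched in NumPy. This is an illustrative forward pass only, not the project's trained network: the kernel here is hand-crafted to match an abrupt barrier-closure-like level drop, whereas a real CNN learns its kernels from data:

```python
import numpy as np

rng = np.random.default_rng(7)

def conv1d_relu_maxpool(x, kernels, bias):
    """Forward pass of a minimal 1-D CNN layer: valid convolution per kernel,
    ReLU, then global max pooling -> one detection score per learned pattern."""
    feats = []
    for k, b in zip(kernels, bias):
        c = np.correlate(x, k, mode="valid") + b
        feats.append(float(np.maximum(c, 0.0).max()))
    return np.array(feats)

# One hand-crafted kernel matched to a sudden step-like water level drop.
step_kernel = np.concatenate([np.ones(5), -np.ones(5)]) / 5.0
kernels, bias = [step_kernel], np.zeros(1)

level = 0.05 * rng.standard_normal(500)          # quiet gauge record
level_event = level.copy()
level_event[250:] -= 1.0                          # abrupt artificial drop

score_quiet = conv1d_relu_maxpool(level, kernels, bias)[0]
score_event = conv1d_relu_maxpool(level_event, kernels, bias)[0]
```

Thresholding such scores is what turns the pooled activations into an event/no-event classification.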

In a later stage of this study, we will implement the developed algorithms in an operational test mode while cooperating closely with the officials to benefit from the mutual feedback. In this way, the study contributes to a future consistent pre-processing and helps to increase the quality of water level data. Moreover, the results are able to reduce uncertainties from the measuring process and improve further calculations based on these data.

* DePArT (Detektion von küstenhydrologischen Phänomenen und Artefakten in minütlichen Tidepegeldaten; engl. Detection of coastal hydrological phenomena and artefacts in minute-by-minute tide gauge data) is a research project, funded by the German Federal Ministry of Education and Research (BMBF) through the project management of Projektträger Jülich PTJ under the grant number 03KIS133.

Gu, Wang, Kuen, Ma, Shahroudy, Shuai, Liu, Wang, Wang, Cai, Chen (2018): Recent advances in convolutional neural networks. In: Pattern Recognition, Vol. 77, Pages 354–377. https://doi.org/10.1016/j.patcog.2017.10.013

How to cite: Soltau, F., Niehüser, S., and Jensen, J.: Using neural networks to detect coastal hydrodynamic phenomena in high-resolution tide gauge data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8399, https://doi.org/10.5194/egusphere-egu22-8399, 2022.

EGU22-8899 | Presentations | NP4.1

Body wave extraction by using sparsity-promoting time-frequency filtering 

Bahare Imanibadrbani, Hamzeh Mohammadigheymasi, Ahmad Sadidkhouy, Rui Fernandes, Ali Gholami, and Martin Schimmel

Different phases of seismic waves generated by earthquakes carry considerable information about subsurface structures as they propagate within the Earth. Depending on the scope and objective of an investigation, various types of seismic phases are studied. Surface waves image shallow and large-scale subsurface features, while body waves provide high-resolution images at greater depths, which cannot otherwise be resolved by surface waves. The most challenging aspect of studying body waves is extracting low-amplitude P and S phases predominantly masked by high-amplitude, low-attenuation surface waves overlapping in time and frequency. Although body waves generally contain higher frequencies than surface waves, the overlapping frequency spectra of body and surface waves limit the application of elementary signal processing methods such as conventional filtering. Advanced signal processing tools are required to work around this problem. Recently, the Sparsity-Promoting Time-Frequency Filtering (SP-TFF) method was developed as a signal processing tool for discriminating between different phases of seismic waves based on their high-resolution polarization information in the Time-Frequency (TF) domain (Mohammadigheymasi et al., 2022). SP-TFF extracts different phases of seismic waves by incorporating this information and utilizing a combination of amplitude, directivity, and rectilinearity filters. This study implements SP-TFF by properly defining a filter combination set for the specific extraction of body waves masked by high-amplitude surface waves. Synthetic and real data examinations are conducted for the source mechanism of the Mw=7.5 earthquake that occurred in November 2021 in Northern Peru and was recorded by 58 stations of the United States National Seismic Network (USNSN).
The results show the remarkable performance of SP-TFF extracting P and SV phases on the vertical and radial components and SH phase on the transverse component masked by high amplitude Rayleigh and Love waves, respectively. A range of S/N levels is tested, indicating the algorithm’s robustness at different noise levels. This research contributes to the FCT-funded SHAZAM (Ref. PTDC/CTA-GEO/31475/2017) and IDL (Ref. FCT/UIDB/50019/2020) projects. It also uses computational resources provided by C4G (Collaboratory for Geosciences) (Ref. PINFRA/22151/2016).
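The rectilinearity attribute that SP-TFF combines with amplitude and directivity filters can be sketched in the time domain from covariance eigenvalues of three-component particle motion. This is an illustration of that single ingredient only — the full method applies such filters to sparsity-promoted polarization estimates in the TF domain, which is not reproduced here, and the toy signals are invented:

```python
import numpy as np

def rectilinearity(window):
    """Rectilinearity of a 3-component (N x 3) particle-motion window:
    1 - (lambda2 + lambda3) / (2 * lambda1) from the covariance eigenvalues.
    Close to 1 for linear (body-wave-like) motion, lower for elliptical motion."""
    lam = np.sort(np.linalg.eigvalsh(np.cov(window.T)))[::-1]
    return float(1.0 - (lam[1] + lam[2]) / (2.0 * lam[0]))

rng = np.random.default_rng(10)
t = np.linspace(0, 4 * np.pi, 400)

# Linearly polarized motion (P-wave-like): all components in phase.
linear = np.column_stack([np.sin(t), 0.5 * np.sin(t), 0.2 * np.sin(t)])
# Elliptical motion (Rayleigh-wave-like): 90-degree phase shift between components.
elliptical = np.column_stack([np.sin(t), np.cos(t), 0.1 * np.sin(t)])

noise = 0.05 * rng.standard_normal((400, 3))
r_lin = rectilinearity(linear + noise)
r_ell = rectilinearity(elliptical + noise)
```

A rectilinearity filter passes TF points with values near 1 (body-wave-like motion) and attenuates elliptical surface-wave energy.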

REFERENCE
Mohammadigheymasi, H., P. Crocker, M. Fathi, E. Almeida, G. Silveira, A. Gholami, and M. Schimmel, 2022, Sparsity-promoting approach to polarization analysis of seismic signals in the time-frequency domain: IEEE Transactions on Geoscience and Remote Sensing, 1–1.

How to cite: Imanibadrbani, B., Mohammadigheymasi, H., Sadidkhouy, A., Fernandes, R., Gholami, A., and Schimmel, M.: Body wave extraction by using sparsity-promoting time-frequency filtering, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8899, https://doi.org/10.5194/egusphere-egu22-8899, 2022.

EGU22-9626 | Presentations | NP4.1

A Recurrence Flow based Approach to Attractor Reconstruction 

Tobias Braun, K. Hauke Kraemer, and Norbert Marwan

In the study of nonlinear observational time series, reconstructing the system’s state space represents the basis for many widely used analyses. From the perspective of dynamical systems theory, Takens’ theorem states that under benign conditions, the reconstructed state space preserves the most fundamental properties of the real, unknown system’s attractor. Through many applications, time delay embedding (TDE) has established itself as the most popular approach for state space reconstruction1. However, standard TDE cannot account for multiscale properties of the system, and many of the more sophisticated approaches either require heuristic choices for a large number of parameters, fail when the signals are corrupted by noise, or obstruct analysis due to their very high complexity.

We present a novel semi-automated, recurrence based method for the problem of attractor reconstruction. The proposed method is based on recurrence plots (RPs), a computationally simple yet effective 2D-representation of a univariate time series. In a recent study, the quantification of RPs has been extended by transferring the well-known box-counting algorithm to recurrence analysis2. We build on this novel formalism by introducing another box-counting measure that was originally put forward by B. Mandelbrot, namely succolarity3. Succolarity quantifies how well a fluid can permeate a binary texture4. We employ this measure by flooding a RP with a (fictional) fluid along its diagonals and computing succolarity as a measure of diagonal flow through the RP. Since a non-optimal choice of embedding parameters impedes the formation of diagonal lines in the RP and generally results in spurious patterns that block the fluid, the attractor reconstruction problem can be formulated as a maximization of diagonal recurrence flow.

The proposed state space reconstruction algorithm allows for non-uniform embedding delays to account for multiscale dynamics. It is conceptually and computationally simple and (nearly) parameter-free. Even in the presence of moderate to high noise intensity, reliable results are obtained. We compare the method’s performance to existing techniques and showcase its effectiveness in applications to paradigmatic examples and nonlinear geoscientific time series.
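The ingredients above — delay embedding, the recurrence plot, and the notion that good embeddings yield unobstructed diagonal structure — can be sketched as follows. This is an illustrative reconstruction, not the authors' succolarity implementation: the `diag_flow` function is a crude diagonal-continuation proxy, not the flooding-based succolarity measure itself:

```python
import numpy as np

def delay_embed(x, dim, delay):
    """Time-delay embedding of a scalar series into dim-dimensional state vectors."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

def recurrence_plot(x, dim, delay, eps):
    """Binary recurrence matrix: R[i, j] = 1 if states i and j are eps-close."""
    X = delay_embed(x, dim, delay)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return (D < eps).astype(int)

def diag_flow(R):
    """Crude diagonal-flow proxy: fraction of recurrences continued diagonally.
    Long unobstructed diagonals (good embedding) score high; scattered points low."""
    cont = R[:-1, :-1] * R[1:, 1:]
    return cont.sum() / max(R[:-1, :-1].sum(), 1)

rng = np.random.default_rng(12)
t = np.linspace(0, 20 * np.pi, 800)

# A periodic signal embedded in 2-D yields long diagonal lines in the RP,
# i.e. high diagonal "recurrence flow"; white noise yields scattered points.
rp_signal = recurrence_plot(np.sin(t), dim=2, delay=10, eps=0.3)
rp_noise = recurrence_plot(rng.standard_normal(800), dim=2, delay=10, eps=0.3)

flow_signal, flow_noise = diag_flow(rp_signal), diag_flow(rp_noise)
```

Maximizing such a diagonal-flow score over candidate (possibly non-uniform) delays is the optimization principle described in the abstract.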

 

References:

1 Packard, N. H., Crutchfield, J. P., Farmer, J. D., & Shaw, R. S. (1980). Geometry from a time series. Physical review letters, 45(9), 712.

2 Braun, T., Unni, V. R., Sujith, R. I., Kurths, J., & Marwan, N. (2021). Detection of dynamical regime transitions with lacunarity as a multiscale recurrence quantification measure. Nonlinear Dynamics, 1-19.

3 Mandelbrot, B. B. (1982). The fractal geometry of nature (Vol. 1). New York: WH freeman.

4 de Melo, R. H., & Conci, A. (2013). How succolarity could be used as another fractal measure in image analysis. Telecommunication Systems, 52(3), 1643-1655.

How to cite: Braun, T., Kraemer, K. H., and Marwan, N.: A Recurrence Flow based Approach to Attractor Reconstruction, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9626, https://doi.org/10.5194/egusphere-egu22-9626, 2022.

EGU22-11064 | Presentations | NP4.1

The Objective Deformation Component of a Velocity Field 

Bálint Kaszás, Tiemo Pedergnana, and George Haller

According to a fundamental axiom of continuum mechanics, material response should be objective, i.e., indifferent to the observer. In the context of geophysical fluid dynamics, fluid-transporting vortices must satisfy this axiom and hence different observers should come to the same conclusion about the location and size of these vortices. As a consequence, only objectively defined extraction methods can provide reliable results for material vortices.

As velocity fields are inherently non-objective, they render most Eulerian flow-feature detection methods non-objective. To resolve this issue, we discuss a general decomposition of a velocity field into an objective deformation component and a rigid-body component. We obtain this decomposition as the solution of a physically motivated extremum problem for the closest rigid-body velocity of a general velocity field.

This extremum problem turns out to have a unique,  physically interpretable,  closed-form solution. Subtracting this solution from the velocity field then gives an objective deformation velocity field that is also physically observable. As a consequence, all common Eulerian feature detection schemes, as well as the momentum, energy, vorticity, enstrophy, and helicity of the flow, become objective when computed from the deformation velocity component. We illustrate the use of this deformation velocity field on several velocity data sets.
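The idea of subtracting the closest rigid-body velocity can be sketched in 2-D with a least-squares fit of translation plus solid-body rotation. This is an illustrative sketch under invented assumptions (discrete scattered velocity data, plain least squares), not the authors' closed-form solution:

```python
import numpy as np

def rigid_body_fit(pos, vel):
    """Least-squares closest rigid-body field u_rb(x) = a + omega * J x in 2-D,
    with J the 90-degree rotation matrix; returns (a, omega)."""
    n = pos.shape[0]
    A = np.zeros((2 * n, 3))
    A[0::2, 0] = 1.0           # translation, x-component
    A[1::2, 1] = 1.0           # translation, y-component
    A[0::2, 2] = -pos[:, 1]    # rotation column: (-y, x)
    A[1::2, 2] = pos[:, 0]
    sol, *_ = np.linalg.lstsq(A, vel.reshape(-1), rcond=None)
    return sol[:2], float(sol[2])

# Synthetic field: solid-body rotation + uniform drift + a pure strain (deformation).
rng = np.random.default_rng(8)
pos = rng.uniform(-1, 1, size=(400, 2))
omega_true, a_true = 0.7, np.array([0.3, -0.2])
J = np.array([[0.0, -1.0], [1.0, 0.0]])
strain = np.array([[0.5, 0.0], [0.0, -0.5]])   # trace-free symmetric: deformation
vel = a_true + omega_true * pos @ J.T + pos @ strain.T

a_fit, omega_fit = rigid_body_fit(pos, vel)

# Subtracting the fitted rigid-body field leaves the deformation component,
# from which feature detection and flow diagnostics can be computed objectively.
deformation = vel - (a_fit + omega_fit * pos @ J.T)
```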

How to cite: Kaszás, B., Pedergnana, T., and Haller, G.: The Objective Deformation Component of a Velocity Field, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11064, https://doi.org/10.5194/egusphere-egu22-11064, 2022.

EGU22-11118 | Presentations | NP4.1

Explainable community detection of extreme rainfall events using the tangles algorithmic framework 

Merle Kammer, Felix Strnad, and Bedartha Goswami

Climate networks have helped to uncover complex structures in climatic observables from large time series data sets. For instance, climate networks were used to reduce rainfall data to relevant patterns that can be linked to geophysical processes. However, the identification of regions that show similar behavior with respect to the timing and spatial distribution of extreme rainfall events (EREs) remains challenging. 
To address this, we apply a recently developed algorithmic framework based on tangles [1] to discover community structures in the spatial distribution of EREs and to obtain inherently interpretable communities as an output. First, we construct a climate network using time-delayed event synchronization and create a collection of cuts (bipartitions) from the EREs data. By using these cuts, the tangles algorithmic framework allows us to both exploit the climate network structure and incorporate prior knowledge from the data. Applying tangles enables us to create a hierarchical tree representation of communities including the likelihood that spatial locations belong to a community. Each tree layer can be associated to an underlying cut, thus making the division of different communities transparent. 
Applied to global precipitation data, we show that the tangles framework is a promising tool for quantifying community structures and for revealing the underlying geophysical processes leading to these structures.


[1] S. Klepper, C. Elbracht, D. Fioravanti, J. Kneip, L. Rendsburg, M. Teegen, and U. von Luxburg. Clustering with Tangles: Algorithmic Framework and Theoretical Guarantees. arXiv preprint arXiv:2006.14444v2, 2021. https://arxiv.org/abs/2006.14444v2

How to cite: Kammer, M., Strnad, F., and Goswami, B.: Explainable community detection of extreme rainfall events using the tangles algorithmic framework, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11118, https://doi.org/10.5194/egusphere-egu22-11118, 2022.

EGU22-11667 | Presentations | NP4.1

Spurious Behaviour in Networks from Spatio-temporal Data 

Moritz Haas, Bedartha Goswami, and Ulrike von Luxburg

Network-based analyses of dynamical systems have become increasingly popular in climate science. Instead of focusing on the chaotic-systems aspect, we come from a statistical perspective and highlight the often-ignored fact that calculated correlation values are only empirical estimates. We find that the uncertainty stemming from the estimation procedure alone has a major impact on network characteristics. Using isotropic random fields on the sphere, we observe spurious behaviour in commonly constructed networks from finite samples. When the data have locally coherent correlation structure, even spurious link-bundle teleconnections have to be expected. We reevaluate the outcome and robustness of existing studies based on their design choices and null hypotheses.
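
The finite-sample effect described above can be reproduced in a few lines; the node count, sample sizes and correlation threshold below are illustrative choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def spurious_link_fraction(n_nodes=50, n_samples=100, threshold=0.3):
    """Fraction of node pairs whose *empirical* correlation exceeds a
    threshold even though the underlying series are independent, i.e. the
    fraction of purely spurious links in a thresholded network."""
    data = rng.standard_normal((n_nodes, n_samples))
    corr = np.corrcoef(data)
    iu = np.triu_indices(n_nodes, k=1)           # all node pairs, once
    return np.mean(np.abs(corr[iu]) > threshold)

frac_short = spurious_link_fraction(n_samples=50)     # short records
frac_long = spurious_link_fraction(n_samples=2000)    # long records
```

With short records a visible fraction of links is spurious; with long records the estimates tighten and the spurious links essentially vanish.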

How to cite: Haas, M., Goswami, B., and von Luxburg, U.: Spurious Behaviour in Networks from Spatio-temporal Data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11667, https://doi.org/10.5194/egusphere-egu22-11667, 2022.

EGU22-12351 | Presentations | NP4.1

VAE4OBS: Denoising ocean bottom seismograms using variational autoencoders 

Maria Tsekhmistrenko, Ana Ferreira, Kasra Hosseini, and Thomas Kitching

Data from ocean-bottom seismometers (OBS) are inherently more challenging than data from their land counterparts because of the noisy ocean environment. Primary and secondary microseismic noise corrupts the recorded time series. Additionally, anthropogenic noise (e.g., ships) and animal noise (e.g., whales) contribute to a complex noise environment that can make it challenging to use traditional filtering methods (e.g., broadband or Gabor filters) to clean and extract information from these seismograms.

OBS deployments are laborious, expensive, and time-consuming. Data from these deployments are crucial for investigating the "blind spots" left by a lack of station coverage. It therefore becomes vital to remove the noise and retrieve the earthquake signals recorded on these seismograms.

We propose analysing and processing such unique and challenging data with Machine Learning (ML), particularly Deep Learning (DL) techniques, where conventional methods fail. We present a variational autoencoder (VAE) architecture to denoise seismic waveforms with the aim of extracting more information than previously possible. We argue that, compared to other fields, seismology is well positioned to use ML and DL techniques thanks to the massive datasets recorded by seismometers.

In the first step, we use synthetic seismograms (generated with Instaseis) and white noise to train a deep neural network. We vary the signal-to-noise ratio during training. Such synthetic datasets have two advantages. First, we know the signal and noise (as we have injected the noise ourselves). Second, we can generate large training and validation datasets, one of the prerequisites for high-quality DL models.
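
The first training step can be sketched as follows; here toy damped sinusoids stand in for the Instaseis synthetics, while the white noise and the varying signal-to-noise ratio follow the description above.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_training_pairs(n_traces, n_samples, snr_db_range=(-5.0, 20.0)):
    """Build (noisy, clean) waveform pairs with varying signal-to-noise ratio.

    Toy stand-in for the abstract's first step: the 'seismograms' here are
    damped sinusoids rather than Instaseis synthetics; the noise is white.
    """
    t = np.linspace(0.0, 10.0, n_samples)
    clean = np.empty((n_traces, n_samples))
    noisy = np.empty((n_traces, n_samples))
    for i in range(n_traces):
        f = rng.uniform(0.5, 2.0)             # dominant frequency (Hz)
        clean[i] = np.exp(-0.3 * t) * np.sin(2 * np.pi * f * t)
        snr_db = rng.uniform(*snr_db_range)   # vary the SNR during training
        sig_pow = np.mean(clean[i] ** 2)
        noise_pow = sig_pow / 10 ** (snr_db / 10)
        noisy[i] = clean[i] + rng.normal(0.0, np.sqrt(noise_pow), n_samples)
    return noisy, clean

noisy, clean = make_training_pairs(8, 256)   # inputs and targets for the VAE
```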

Next, we increase the complexity of the input data by adding real noise, sampled from land and OBS recordings, to the synthetic seismograms. Finally, we apply the trained model to real OBS data recorded during the RHUM-RUM experiment.

We present the workflow, the neural network architecture, our training strategy, and the usefulness of our trained models compared to traditional methods.

How to cite: Tsekhmistrenko, M., Ferreira, A., Hosseini, K., and Kitching, T.: VAE4OBS: Denoising ocean bottom seismograms using variational autoencoders, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12351, https://doi.org/10.5194/egusphere-egu22-12351, 2022.

EGU22-13053 | Presentations | NP4.1

Causal Diagnostics for Observations - Experiments with the L63 system 

Nachiketa Chakraborty and Javier Amezcua

The study of cause-and-effect relationships (causality) is central to identifying the mechanisms behind the phenomena we observe, and in non-linear dynamical systems we wish to understand these mechanisms as they unfold over time. In physical sciences such as the geosciences and astrophysics, numerous competing causes drive the system in complicated ways that are hard to disentangle. Hence, it is important to demonstrate how causal attribution works with relatively simple systems for which we have physical intuition. Furthermore, in the earth and atmospheric sciences and in meteorology, we have a plethora of observations that are used both for understanding the science underlying the phenomena and for forecasting. However, optimally combining theoretical or numerical models with the observations through data assimilation is a challenging, computationally intensive task. Therefore, understanding the impact of observations and the required cadence is very useful. Here, we present experiments in causal inference and attribution with the Lorenz 63 system, a long-studied system. We first test the causal relations between the variables characterising the model. We then simulate observations using perturbed versions of the model to test the impact of the cadence of observations for each combination of the three variables.
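
As a minimal companion to the experiments described above, the sketch below integrates the Lorenz 63 system and computes a lagged cross-correlation between variables, a crude stand-in for formal causal diagnostics and only meant to fix the setting.

```python
import numpy as np

def lorenz63(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
             x0=(1.0, 1.0, 1.0)):
    """Integrate the Lorenz 63 system with a fourth-order Runge-Kutta scheme."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    traj = np.empty((n_steps, 3))
    s = np.array(x0, dtype=float)
    for i in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj

def lagged_corr(a, b, lag):
    """Correlation between a(t) and b(t + lag): a crude first diagnostic
    before any formal causal test."""
    if lag == 0:
        return np.corrcoef(a, b)[0, 1]
    return np.corrcoef(a[:-lag], b[lag:])[0, 1]

traj = lorenz63(5000)
x, y, z = traj[1000:, 0], traj[1000:, 1], traj[1000:, 2]  # drop transient
c0 = lagged_corr(x, y, 0)   # x and y are strongly coupled in L63
```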

How to cite: Chakraborty, N. and Amezcua, J.: Causal Diagnostics for Observations - Experiments with the L63 system, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13053, https://doi.org/10.5194/egusphere-egu22-13053, 2022.

EGU22-13342 | Presentations | NP4.1

Instantaneous fractal dimensions and stability properties of geomagnetic indices based on recurrence networks and extreme value theory 

Reik Donner, Tommaso Alberti, and Davide Faranda

An accurate understanding of dynamical similarities and dissimilarities in geomagnetic variability between quiet and disturbed periods has the potential to vastly improve Space Weather diagnosis. During the last years, several approaches rooted in dynamical system theory have demonstrated their great potential for characterizing the instantaneous level of complexity in geomagnetic activity and solar wind variations, and for revealing indications of intermittent large-scale coupling and generalized synchronization phenomena in the Earth’s electromagnetic environment. In this work, we focus on two complementary approaches based on the concept of recurrences in phase space, both of which quantify subtle geometric properties of the phase space trajectory instead of taking an explicit temporal variability perspective. We first quantify the local (instantaneous) and global fractal dimensions and associated local stability properties of a suite of low (SYM-H, ASY-H) and high latitude (AE, AL, AU) geomagnetic indices and discuss similarities and dissimilarities of the obtained patterns for one year of observations during a solar activity maximum. Subsequently, we proceed with studying bivariate extensions of both approaches, and demonstrate their capability of tracing different levels of interdependency between low and high latitude geomagnetic variability during periods of magnetospheric quiescence and during perturbations associated with geomagnetic storms and magnetospheric substorms, respectively. Ultimately, we investigate the effect of time scale on the level of dynamical organization of fluctuations by studying iterative reconstructions of the index values based on intrinsic mode functions obtained from univariate and multivariate versions of empirical mode decomposition.
Our results open new perspectives on the nonlinear dynamics and (likely intermittent) mutual entanglement of different parts of the geospace electromagnetic environment, including the equatorial and westward auroral electrojets, depending on the overall state of the geospace system as affected by temporary variations of the solar wind forcing. In addition, they contribute to a better understanding of the potential and limitations of two contemporary approaches of nonlinear time series analysis in the field of space physics.
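
The extreme-value estimator of instantaneous dimension that underlies such analyses can be checked on a toy point set of known dimension; thresholding and distribution-fitting details are simplified here relative to a full analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_dimension(points, ref, q=0.98):
    """Instantaneous (local) dimension at a reference point via extreme
    value theory: exceedances of g = -log(distance) over a high quantile
    are asymptotically exponential with mean 1/d, so d is estimated as the
    reciprocal of the mean excess."""
    dist = np.linalg.norm(points - ref, axis=1)
    dist = dist[dist > 0]                 # drop the point itself, if present
    g = -np.log(dist)
    thr = np.quantile(g, q)               # high threshold on closeness
    exc = g[g > thr] - thr
    return 1.0 / exc.mean()

# sanity check on a set with known dimension 2: a uniform sample of a square
pts = rng.uniform(-1.0, 1.0, size=(200000, 2))
d_hat = local_dimension(pts, ref=np.zeros(2))   # should be close to 2
```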

How to cite: Donner, R., Alberti, T., and Faranda, D.: Instantaneous fractal dimensions and stability properties of geomagnetic indices based on recurrence networks and extreme value theory, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13342, https://doi.org/10.5194/egusphere-egu22-13342, 2022.

EGU22-986 | Presentations | HS3.6

Quantifying solute transport numerical dispersion in integrated surface-subsurface hydrological modeling 

Beatrice Gatto, Claudio Paniconi, Paolo Salandin, and Matteo Camporese

Numerical dispersion is a well-known problem that affects solute transport in groundwater simulations and can lead to wrong results, in terms of plume path overestimation and overprediction of contaminant dispersion. Numerical dispersion is generally introduced through stabilization techniques aimed at preventing oscillations, with the side effect of increasing mass spreading. Even though this issue has long been investigated in subsurface hydrology, little is known about its possible impacts on integrated surface–subsurface hydrological models (ISSHMs). In this study, we analyze numerical dispersion in the CATchment HYdrology (CATHY) model. In CATHY, a robust and computationally efficient time-splitting technique is implemented for the solution of the subsurface transport equation, whereby the advective part is solved on elements with an explicit finite volume scheme and the dispersive part is solved on nodes with an implicit finite element scheme. Taken alone, the advection and dispersion solvers provide accurate results. However, when coupled, the continuous transfer of concentration from elements to nodes, and vice versa, gives rise to a particular form of numerical dispersion. We assess the nature and impact of this artificial spreading through two sets of synthetic experiments. In the first set, the subsurface transport of a nonreactive tracer in two soil column test cases is simulated and compared with known analytical solutions. Different input dispersion coefficients and mesh discretizations are tested, in order to quantify the numerical error and define a criterion for its containment. In the second set of experiments, fully coupled surface–subsurface processes are simulated using two idealized hillslopes, one concave and one convex, and we examine how the additional subsurface dispersion affects the representation of pre-event water contribution to the streamflow hydrograph. 
Overall, we show that the numerical dispersion in CATHY that is caused by the transfer of information between elements and nodes can be kept under control if the grid Péclet number is less than 1. It is also suggested that the test cases used in this study can be useful benchmarks for integrated surface–subsurface hydrological models, for which thus far only flow benchmarks have been proposed.
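
The grid Péclet criterion quoted above is straightforward to operationalize when designing a mesh; the velocity and dispersion values below are illustrative, not those of the CATHY test cases.

```python
def grid_peclet(velocity, dispersion, dx):
    """Grid Peclet number Pe = v * dx / D for an advection-dispersion grid.

    The criterion from the study: keep Pe < 1 to contain the numerical
    dispersion introduced by the element-to-node concentration transfer.
    """
    return velocity * dx / dispersion

def max_dx(velocity, dispersion, pe_max=1.0):
    """Coarsest grid spacing that still satisfies Pe < pe_max."""
    return pe_max * dispersion / velocity

v, D = 1e-5, 1e-7              # m/s and m^2/s, illustrative values only
pe = grid_peclet(v, D, dx=0.005)   # 0.5 < 1, so this spacing is acceptable
```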

How to cite: Gatto, B., Paniconi, C., Salandin, P., and Camporese, M.: Quantifying solute transport numerical dispersion in integrated surface-subsurface hydrological modeling, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-986, https://doi.org/10.5194/egusphere-egu22-986, 2022.

EGU22-1210 | Presentations | HS3.6

An alternative strategy for combining likelihood values in Bayesian calibration to improve model predictions 

Michelle Viswanathan, Tobias K. D. Weber, and Anneli Guthke

Conveying uncertainty in model predictions is essential, especially when these predictions are used for decision-making. Models are not only expected to achieve the best possible fit to available calibration data but to also capture future observations within realistic uncertainty intervals. Model calibration using Bayesian inference facilitates the tuning of model parameters based on existing observations, while accounting for uncertainties. The model is tested against observed data through the likelihood function which defines the probability of the data being generated by the given model and its parameters. Inference of most plausible parameter values is influenced by the method used to combine likelihood values from different observation data sets. In the classical method of combining likelihood values, referred to here as the AND calibration strategy, it is inherently assumed that the given model is true (error-free), and that observations in different data sets are similarly informative for the inference problem. However, practically every model applied to real-world case studies suffers from model-structural errors that are typically dynamic, i.e., they vary over time. A requirement for the imperfect model to fit all data sets simultaneously will inevitably lead to an underestimation of uncertainty due to a collapse of the resulting posterior parameter distributions. Additionally, biased 'compromise solutions' to the parameter estimation problem result in large prediction errors that impair subsequent conclusions. 
    
We present an alternative AND/OR calibration strategy which provides a formal framework to relax posterior predictive intervals and minimize posterior collapse by incorporating knowledge about similarities and differences between data sets. As a case study, we applied this approach to calibrate a plant phenology model (SPASS) to observations of the silage maize crop grown at five sites in southwestern Germany between 2010 and 2016. We compared model predictions of phenology obtained using the classical AND calibration strategy with those from two scenarios (OR and ANDOR) in the AND/OR strategy of combining likelihoods from the different data sets. The OR scenario represents an extreme contrast to the AND strategy: all data sets are assumed to be distinct, and the model is allowed to find individually good fits to each data set, adjusting to the individual type and strength of model error. The ANDOR scenario acts as an intermediate solution between the two extremes by accounting for known similarities and differences between data sets, and hence grouping them according to the anticipated type and strength of model error. 
    
We found that the OR scenario led to lower precision but higher accuracy of prediction results as compared to the classical AND calibration. The ANDOR scenario led to higher accuracy as compared to the AND strategy and higher precision as compared to the OR scenario. Our proposed approach has the potential to improve the prediction capability of dynamic models in general, by considering the effect of model error when calibrating to different data sets.
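
The contrast between the AND and OR strategies can be illustrated with a deliberately mis-specified toy model (a single constant mean fitted to two data sets with different true means); this is our sketch, not the SPASS setup, and the intermediate ANDOR grouping is omitted for brevity.

```python
import numpy as np

def log_likelihood(theta, data, sigma=1.0):
    """Gaussian log-likelihood of one data set under a constant-mean model."""
    return (-0.5 * np.sum((data - theta) ** 2) / sigma**2
            - 0.5 * len(data) * np.log(2 * np.pi * sigma**2))

def combine_and(theta, datasets):
    """Classical AND strategy: one parameter must explain all data sets."""
    return sum(log_likelihood(theta, d) for d in datasets)

def fit_or(datasets, grid):
    """OR extreme: each data set gets its own best-fitting parameter."""
    fits = []
    for d in datasets:
        ll = [log_likelihood(t, d) for t in grid]
        fits.append(grid[int(np.argmax(ll))])
    return fits

# two data sets with different true means -> structural model error
rng = np.random.default_rng(1)
d1 = rng.normal(0.0, 1.0, 200)
d2 = rng.normal(2.0, 1.0, 200)
grid = np.linspace(-1.0, 3.0, 401)
and_best = grid[int(np.argmax([combine_and(t, [d1, d2]) for t in grid]))]
or_best = fit_or([d1, d2], grid)   # individual fits near 0 and 2
```

The AND fit lands on a biased compromise between the two data sets, while the OR fits recover each data set's own behaviour, mirroring the precision/accuracy trade-off reported above.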

How to cite: Viswanathan, M., Weber, T. K. D., and Guthke, A.: An alternative strategy for combining likelihood values in Bayesian calibration to improve model predictions, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1210, https://doi.org/10.5194/egusphere-egu22-1210, 2022.

EGU22-1459 | Presentations | HS3.6

Modelling decisions: a quantification of their influence on model results 

Janneke Remmers, Ryan Teuling, and Lieke Melsen

Scientific hydrological modellers make multiple decisions during the modelling process, e.g., related to the calibration period and temporal resolution. These decisions affect the model results. Modelling decisions can refer to several steps in the modelling process; in this study, they refer to the decisions made during the whole modelling process, beyond the definition of the model structure. This study is based on an analysis of interviews with scientific hydrological modellers, thus taking actual practices into account. Six modelling decisions, mainly motivated by personal and team experience, were identified from the interviews: calibration method, calibration period, parameters to calibrate, pre-processing of input data, spin-up period, and temporal resolution. Different options for these six decisions, as encountered in the interviews, were implemented and evaluated in a controlled modelling environment, in our case the modular modelling framework Raven, to quantify their impact on model output. The variation in the results is analysed using three hydrological signatures to determine which decisions affect the results and how. Each model output is a hypothesis about reality: an interpretation of the real system underpinned by scientific reasoning and/or expert knowledge. Currently, there is a lack of knowledge and understanding about which modelling decisions are taken and why. Consequently, the influence of modelling decisions is unknown. Quantifying this influence, as done in this study, can raise awareness among scientists. This study pinpoints which aspects are important to consider in studying modelling decisions, and can be an incentive to clarify and improve modelling procedures.

How to cite: Remmers, J., Teuling, R., and Melsen, L.: Modelling decisions: a quantification of their influence on model results, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1459, https://doi.org/10.5194/egusphere-egu22-1459, 2022.

EGU22-1639 | Presentations | HS3.6

Rigorous Exploration of Complex Environmental Models to Advance Scientific Understanding 

Robert Reinecke, Francesca Pianosi, and Thorsten Wagener

Environmental models are central for advancing science by increasingly serving as a digital twin of the earth and its components. They allow us to conduct experiments to test hypotheses and understand dominant processes that are infeasible to do in the real world. To foster our knowledge, we build increasingly complex models hoping that they become more complete and realistic images of the real world. However, we believe that our scientific progress is slowed down as methods for the rigorous exploration of these models, in the face of unavoidable data- and epistemic-uncertainties, do not evolve in a similar manner.

Based on an extensive literature review, we show that even though methods for such rigorous exploration of model responses, e.g., global sensitivity analysis methods, are well established, there is an upper boundary to which level of model complexity they are applied today. Still, we claim that the potential for their utilization in a wider context is significant.

We argue here that a key issue to consider in this context is the framing of the sensitivity analysis problem. We show, using published examples, how problem framing defines the outcome of a sensitivity analysis in the context of scientific advancement. Without appropriate framing, sensitivity analysis of complex models reduces to a diagnostic analysis of the model, with only limited transferability of the conclusions to the real-world system.
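
As a concrete illustration of what a sensitivity analysis computes before any framing is imposed, the sketch below estimates crude first-order indices by binning, applied to the standard Ishigami test function; a full variance-based analysis would use dedicated estimators.

```python
import numpy as np

def first_order_sensitivity(X, y, n_bins=20):
    """Crude first-order sensitivity indices via the binned correlation
    ratio, Si ~ Var(E[y | xi]) / Var(y): a minimal stand-in for a full
    variance-based global sensitivity analysis."""
    var_y = np.var(y)
    out = []
    for i in range(X.shape[1]):
        edges = np.quantile(X[:, i], np.linspace(0, 1, n_bins + 1))
        idx = np.clip(np.searchsorted(edges, X[:, i], side="right") - 1,
                      0, n_bins - 1)
        means = np.array([y[idx == b].mean() for b in range(n_bins)])
        weights = np.array([(idx == b).mean() for b in range(n_bins)])
        out.append(np.sum(weights * (means - y.mean()) ** 2) / var_y)
    return np.array(out)

rng = np.random.default_rng(7)
X = rng.uniform(-np.pi, np.pi, size=(20000, 3))
# Ishigami test function: x3 has no first-order effect on its own
y = (np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2
     + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))
S = first_order_sensitivity(X, y)   # roughly (0.31, 0.44, 0.0)
```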

How to cite: Reinecke, R., Pianosi, F., and Wagener, T.: Rigorous Exploration of Complex Environmental Models to Advance Scientific Understanding, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1639, https://doi.org/10.5194/egusphere-egu22-1639, 2022.

EGU22-1742 | Presentations | HS3.6

c-u-curve: A method to analyze, classify and compare dynamical systems by uncertainty and complexity 

Uwe Ehret and Pankaj Dey

We propose a method to analyse, classify and compare dynamical systems of arbitrary dimension by two key features: uncertainty and complexity. It starts by subdividing the system’s time-trajectory into a number of time slices. For all values in a time slice, the Shannon information entropy is calculated, measuring within-slice variability. System uncertainty is then expressed by the mean entropy of all time slices. We define system complexity as “uncertainty about uncertainty”, and express it by the entropy of the entropies of all time slices. Calculating and plotting uncertainty u and complexity c for many different numbers of time slices yields the c-u-curve. Systems can be analysed, compared and classified by the c-u-curve in terms of i) its overall shape, ii) mean and maximum uncertainty, iii) mean and maximum complexity, and iv) its characteristic time scale, expressed by the width of the time slice for which maximum complexity occurs. We demonstrate the method on both synthetic and real-world time series (constant, random noise, Lorenz attractor, precipitation and streamflow) and show that conclusions drawn from the c-u-curve are in accordance with expectations. The method is based on unit-free probabilities and therefore permits application to and comparison of arbitrary data. It naturally expands from single- to multivariate systems, and from deterministic to probabilistic value representations, allowing e.g. application to ensemble model predictions.
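
For one slice width, the two quantities can be computed as follows; the binning choices here are ours, not prescribed by the method.

```python
import numpy as np

def c_u_point(series, n_slices, n_bins=10):
    """Uncertainty and complexity for one slice width.

    Uncertainty u = mean Shannon entropy of the values in each time slice;
    complexity c = entropy of those slice entropies ('uncertainty about
    uncertainty'), following the description above.
    """
    slices = np.array_split(series, n_slices)
    lo, hi = series.min(), series.max()
    entropies = []
    for s in slices:
        p, _ = np.histogram(s, bins=n_bins, range=(lo, hi))
        p = p / p.sum()
        p = p[p > 0]
        entropies.append(-np.sum(p * np.log2(p)))
    entropies = np.asarray(entropies)
    u = entropies.mean()
    # entropy of the entropies, binned over their possible range
    q, _ = np.histogram(entropies, bins=n_bins, range=(0.0, np.log2(n_bins)))
    q = q / q.sum()
    q = q[q > 0]
    c = -np.sum(q * np.log2(q))
    return u, c

rng = np.random.default_rng(3)
noise = rng.uniform(0.0, 1.0, 10000)
u_noise, c_noise = c_u_point(noise, n_slices=50)
```

Random noise sits near maximum uncertainty with low complexity, in line with the expectations stated above.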

How to cite: Ehret, U. and Dey, P.: c-u-curve: A method to analyze, classify and compare dynamical systems by uncertainty and complexity, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1742, https://doi.org/10.5194/egusphere-egu22-1742, 2022.

EGU22-1870 | Presentations | HS3.6

Inference of (geostatistical) hyperparameters with the correlated pseudo-marginal method 

Lea Friedli, Niklas Linde, David Ginsbourger, Alejandro Fernandez Visentini, and Arnaud Doucet

We consider non-linear Bayesian inversion problems to infer the (geostatistical) hyperparameters of a random field describing (hydro)geological or geophysical properties by inversion of hydrogeological or geophysical data. This problem is of particular importance in the non-ergodic setting as no analytical upscaling relationships exist linking the data (resulting from a specific field realization) to the hyperparameters specifying the spatial distribution of the underlying random field (e.g., mean, standard deviation, and integral scales). Jointly inferring the hyperparameters and the "true" realization of the field (typically involving many thousands of unknowns) brings important computational challenges, such that in practice, simplifying model assumptions (such as homogeneity or ergodicity) are made. To prevent the errors resulting from such simplified assumptions while circumventing the burden of high-dimensional full inversions, we use a pseudo-marginal Metropolis-Hastings algorithm that treats the random field as a latent variable. In this random effect model, the intractable likelihood of the data given the hyperparameters is estimated by Monte Carlo averaging over realizations of the random field. To increase the efficiency of the method, low-variance approximations of the likelihood ratio are ensured by correlating the samples used in the proposed and current steps of the Markov chain and by using importance sampling. We assess the performance of this correlated pseudo-marginal method on the problem of inferring the hyperparameters of fracture aperture fields using borehole ground-penetrating radar (GPR) reflection data. We demonstrate that the correlated pseudo-marginal method bypasses the computational challenges of a very high-dimensional target space while avoiding the strong bias and overly narrow uncertainty ranges obtained when employing simplified model assumptions. 
These advantages also apply when using the posterior of the hyperparameters describing the aperture field to predict its effective hydraulic transmissivity.
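
A minimal sketch of the correlated pseudo-marginal mechanics, using a toy random-effect model (group means standing in for the latent random field) rather than the GPR aperture problem; all model settings below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy random-effect model: group effects b_g ~ N(0, tau^2), observations
# y_gi = b_g + e_gi with e ~ N(0, 0.5^2). The hyperparameter is tau.
G, m, n_mc, sigma_e, tau_true = 25, 8, 512, 0.5, 1.5
b = rng.normal(0.0, tau_true, G)
y = b[:, None] + rng.normal(0.0, sigma_e, (G, m))

def log_lik_hat(tau, u):
    """Unbiased MC estimate of log p(y | tau), driven by auxiliary normals u
    of shape (G, n_mc): average over latent realizations b_g = tau * u."""
    bsim = tau * u
    se = (y[:, None, :] - bsim[:, :, None]) ** 2        # (G, n_mc, m)
    ll = (-0.5 * se.sum(axis=2) / sigma_e**2
          - 0.5 * m * np.log(2 * np.pi * sigma_e**2))   # (G, n_mc)
    mmax = ll.max(axis=1, keepdims=True)                # stable log-mean-exp
    per_group = mmax[:, 0] + np.log(np.mean(np.exp(ll - mmax), axis=1))
    return per_group.sum()

def cpm_sampler(n_iter=2000, rho=0.995, step=0.1):
    tau, u = 1.0, rng.standard_normal((G, n_mc))
    ll = log_lik_hat(tau, u)
    out = []
    for _ in range(n_iter):
        tau_p = abs(tau + step * rng.standard_normal())  # reflect at zero
        # correlate the auxiliary variables between steps: this keeps the
        # likelihood-ratio estimate low-variance (the 'correlated' in CPM)
        u_p = rho * u + np.sqrt(1.0 - rho**2) * rng.standard_normal((G, n_mc))
        ll_p = log_lik_hat(tau_p, u_p)
        if np.log(rng.uniform()) < ll_p - ll:            # flat prior, tau > 0
            tau, u, ll = tau_p, u_p, ll_p
        out.append(tau)
    return np.array(out)

chain = cpm_sampler()
post_mean = chain[500:].mean()   # should sit near tau_true = 1.5
```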

How to cite: Friedli, L., Linde, N., Ginsbourger, D., Fernandez Visentini, A., and Doucet, A.: Inference of (geostatistical) hyperparameters with the correlated pseudo-marginal method, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1870, https://doi.org/10.5194/egusphere-egu22-1870, 2022.

EGU22-1907 | Presentations | HS3.6

Quantifying space-time patterns of precipitation importance for flood generation via interpretability of deep-learning models 

E. Morin, R. Rojas, and A. Wiesel

This study proposes a new approach for quantitatively assessing the importance of precipitation features in space and time for predicting streamflow discharge (and, hence, sensitivity). For this, we combine well-performing deep-learning (DL) models with interpretability tools.

The DL models are composed of convolutional neural networks (CNNs) and long short-term memory (LSTM) networks. Their input is precipitation data distributed over the watershed and extending back in time (other inputs, such as meteorological and watershed properties, can also be included). Their output is streamflow discharge at a present or future time. Interpretability tools allow learning about the modeled system. We used the Integrated Gradients method, which provides a level of importance (IG value) for each space-time precipitation feature for a given streamflow prediction. We applied the models and interpretability tools to several watersheds in the US and India.
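
The Integrated Gradients computation itself is compact; the sketch below uses numerical gradients of a toy scalar "model" (a DL framework would supply exact gradients via automatic differentiation).

```python
import numpy as np

def integrated_gradients(f, x, baseline, n_steps=100, eps=1e-5):
    """Integrated Gradients for a scalar function f of a vector input.

    IG_i = (x_i - x'_i) * mean over alpha of dF/dx_i at x' + alpha (x - x'),
    with the path integral approximated by a midpoint Riemann sum and the
    gradients taken by central differences.
    """
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    alphas = (np.arange(n_steps) + 0.5) / n_steps
    total = np.zeros_like(x)
    for a in alphas:
        p = baseline + a * (x - baseline)
        grad = np.empty_like(x)
        for i in range(len(x)):
            dp, dm = p.copy(), p.copy()
            dp[i] += eps
            dm[i] -= eps
            grad[i] = (f(dp) - f(dm)) / (2 * eps)
        total += grad
    return (x - baseline) * total / n_steps

# toy 'model': depends strongly on input 0, weakly on 1, not at all on 2
f = lambda x: 3.0 * x[0] ** 2 + 0.5 * x[1]
x = np.array([1.0, 1.0, 1.0])
ig = integrated_gradients(f, x, baseline=np.zeros(3))
```

A useful sanity check is the completeness property: the attributions sum to f(x) minus f(baseline).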

To understand the importance of precipitation features for flood generation, we compared spatial and temporal patterns of IG for high flows vs. low and medium flows. Our results so far indicate some similar patterns for the two categories of flows, while others are distinctly different. For example, common IG modes exist at short times before the discharge, but the modes are substantially different further back in time. Similarly, some spatial cores of high IG appear in both flow categories, but other watershed cores are featured only for high flows. These differences in IG time and space patterns are presumably associated with slow and fast flow paths and threshold-runoff mechanisms.

There are several advantages to the proposed approach: 1) recent studies have shown DL models to outperform standard process-based hydrological models; 2) given data availability and quality, DL models are much easier to train and validate than process-based hydrological models, and therefore many watersheds can be included in the analysis; 3) DL models do not explicitly represent hydrological processes, and thus sensitivities derived in this approach are assured to represent patterns arising from the data. The main disadvantage of the proposed approach is its limitation to gauged watersheds; however, large data sets of gauged streamflow are publicly available to exploit.

It should be stressed that learning about hydrological sensitivities with DL models is proposed here as a complementary approach to analyzing process-based hydrological models. Even though DL models are considered black boxes, together with interpretability tools they can highlight sensitivities that are hard or impossible to resolve with standard models.

How to cite: Morin, E., Rojas, R., and Wiesel, A.: Quantifying space-time patterns of precipitation importance for flood generation via interpretability of deep-learning models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1907, https://doi.org/10.5194/egusphere-egu22-1907, 2022.

EGU22-2220 | Presentations | HS3.6

Inversion of Hydraulic Tomography Data from the Grimsel Test Site with a Discrete Fracture Network Model 

Lisa Maria Ringel, Mohammadreza Jalali, and Peter Bayer

This study aims at the stochastic characterization of fractured rocks with a low-permeability matrix based on transient data from hydraulic tomography experiments. In such rocks, fractures function as the main flowpaths. Therefore, adequate insight into the distribution and properties of fractures is essential for many applications such as groundwater remediation, constructing nuclear waste repositories or developing enhanced geothermal systems. At the Grimsel test site in Switzerland, multiple hydraulic tests have been conducted to investigate the hydraulic properties and structure of the fracture network between two shear zones. We present results from a combined stochastic inversion of these tests to infer the fracture network of the studied crystalline rock formation.

Data from geological mapping at Grimsel and the hydraulic tomography experiments undertaken as part of in-situ stimulation and circulation experiments provide the prior knowledge for the model inversion. This information is used for setting up a site-specific conceptual model, defining the boundary and initial conditions of the groundwater flow model, and configuring the inversion problem. The pressure signals we use for the inversion stem from cross-borehole constant-rate injection tests recorded at different depths, whereby the different intervals are isolated by packer systems.

In the forward model, the fractures are represented explicitly as a three-dimensional (3D) discrete fracture network (DFN). The posterior density of the geometric and hydraulic properties of the DFN follows from Bayes' theorem. The properties are inferred by sampling iteratively from the posterior density function according to the reversible jump Markov chain Monte Carlo sampling strategy. The goal of this inversion is to provide DFN realizations that minimize the error between the simulated and observed pressure signals and that meet the prior information. During the course of the inversion, the number of fractures is iteratively adjusted by adding or deleting a fracture. Furthermore, the parameters of the DFN are adapted by moving a fracture and by changing the fracture length or hydraulic properties. The algorithm thus switches between updates that change the number of parameters and updates that keep the number of parameters but adjust their values. The inversion results reveal the main structural and hydraulic characteristics of the DFN, the preferential flowpaths, and the uncertainty of the estimated model parameters.
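
The dimension-jumping bookkeeping behind reversible jump sampling can be sketched on a deliberately simple target (the fracture-network model itself is far richer): the number of components k plays the role of the number of fractures, with birth and death moves changing the dimension.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy trans-dimensional target: state (k, theta) with k components, target
# k-marginal proportional to w[k-1] and each theta_j ~ N(0, 1).
w = np.array([0.5, 0.3, 0.2])     # target probabilities for k = 1, 2, 3
K = len(w)

def rjmcmc(n_iter=20000):
    k, theta = 1, [rng.standard_normal()]
    ks = np.empty(n_iter, dtype=int)
    for t in range(n_iter):
        if rng.uniform() < 0.5:                       # birth proposal
            if k < K:
                v = rng.standard_normal()             # matches the prior, so
                # the proposal density cancels the new component's prior and
                # only the k-marginal ratio remains in the acceptance rule
                if rng.uniform() < min(1.0, w[k] / w[k - 1]):
                    theta.append(v)
                    k += 1
        else:                                         # death proposal
            if k > 1:
                if rng.uniform() < min(1.0, w[k - 2] / w[k - 1]):
                    theta.pop()
                    k -= 1
        ks[t] = k
    return ks

ks = rjmcmc()
freq = np.array([(ks == i + 1).mean() for i in range(K)])  # ~ w
```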

How to cite: Ringel, L. M., Jalali, M., and Bayer, P.: Inversion of Hydraulic Tomography Data from the Grimsel Test Site with a Discrete Fracture Network Model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2220, https://doi.org/10.5194/egusphere-egu22-2220, 2022.

EGU22-2388 | Presentations | HS3.6

Estimation of simulation parameters for steady and transient 3D flow modeling at watershed scale 

Gillien Latour, Pierre Horgue, François Renard, Romain Guibert, and Gérald Debenest

Unsaturated water flows at the watershed or Darcy scale are generally described by the Richardson-Richards equation. This equation is highly non-linear, and simulation domains are limited by computational costs. The porousMultiphaseFoam toolbox is a finite volume tool capable of modeling multiphase flows in porous media, including the solution of the Richardson-Richards equation. As it has been developed in the OpenFOAM environment, the software is natively fully parallelized and can be used on supercomputers. Using experimental data from a real site with geographical information and piezometric values, an iterative algorithm is set up to solve an inverse problem in order to evaluate an adequate permeability field. This procedure is initially implemented using a simplified aquifer model with a 2D saturated modeling approach. A similar procedure using a full 3D model of the actual site is then performed (handling both saturated and unsaturated areas). The results of the two approaches (2D and 3D) are compared for steady simulations, and new post-processing tools are introduced to spatialize the error between the two models and define the areas in which the behaviour of the models differs. In a second part, an optimization of the van Genuchten parameters is performed to reproduce transient experimental data. The 3D numerical results at the watershed scale are also compared to reference simulations using a 1D unsaturated + 2D saturated modeling approach.
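
The iterative inversion loop can be sketched with a one-dimensional analytical aquifer standing in for the porousMultiphaseFoam forward model; the recharge, domain and permeability bounds below are illustrative only.

```python
import numpy as np

def simulate_head(K, N=1e-8, L=1000.0, h0=50.0, n=101):
    """Steady 1D aquifer with uniform recharge N and fixed head h0 at both
    ends: analytical solution of K h'' = -N, used here as a toy stand-in
    for the full forward model."""
    x = np.linspace(0.0, L, n)
    return h0 + N * x * (L - x) / (2.0 * K)

def calibrate_K(h_obs, K_lo=1e-7, K_hi=1e-3, n_iter=60):
    """Bisection on log K to match observed heads: a minimal version of an
    iterative inverse procedure for a (here homogeneous) permeability."""
    def misfit(K):
        return np.mean(simulate_head(K) - h_obs)   # signed mean head error
    for _ in range(n_iter):
        K_mid = np.sqrt(K_lo * K_hi)               # midpoint in log space
        if misfit(K_mid) > 0:      # simulated heads too high -> K too small
            K_lo = K_mid
        else:
            K_hi = K_mid
    return np.sqrt(K_lo * K_hi)

K_true = 2e-5
h_obs = simulate_head(K_true)      # synthetic 'piezometric observations'
K_est = calibrate_K(h_obs)         # recovers K_true
```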

How to cite: Latour, G., Horgue, P., Renard, F., Guibert, R., and Debenest, G.: Estimation of simulation parameters for steady and transient 3D flow modeling at watershed scale, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2388, https://doi.org/10.5194/egusphere-egu22-2388, 2022.

EGU22-2782 | Presentations | HS3.6

Global Sensitivity Analysis of an integrated parallel hydrologic model: ParFlow-CLM 

Wei Qu, Heye Bogena, Christoph Schüth, Harry Vereecken, and Stephan Schulz

An integrated parallel hydrologic model (ParFlow-CLM) was constructed to predict water and energy transport between the subsurface, land surface, and atmosphere for a synthetic study using basic physical properties of the Stettbach headwater catchment, Germany. Based on this model, a global sensitivity analysis was performed using the Latin-Hypercube (LH) sampling strategy followed by the One-factor-At-a-Time (OAT) method to identify the most influential and interactive parameters affecting the main hydrologic processes. In addition, the sensitivity analysis was carried out for different assumptions on slope and meteorological conditions to show the transferability of the results to regions with other topographies and climates. Our results show that the simulated energy fluxes, i.e. latent heat flux, sensible heat flux and soil heat flux, are particularly sensitive to the wilting point, leaf area index, and stem area index, especially under steep-slope and subarctic climate conditions. The simulated water fluxes, i.e. evaporation, transpiration, infiltration, and runoff, are most sensitive to soil porosity, the van Genuchten parameter n, the wilting point, and the leaf area index. Subsurface water storage and groundwater storage are most sensitive to soil porosity, while surface water storage is most sensitive to Manning's n. For the different slope and climate conditions, the rank order of input parameter sensitivity was consistent, but the magnitude of parameter sensitivity differed considerably. The strongest deviation in parameter sensitivity occurred for sensible heat flux under different slope conditions and for transpiration under different climate conditions. This study provides an efficient method for identifying the most important input parameters of the model and shows how the variation in the output of a numerical model can be attributed to variations of its input factors.
The results help to better understand process representation of the model and reduce the computational cost of running high numbers of simulations. 
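The LH-OAT scheme — One-factor-At-a-Time perturbations around Latin-hypercube base points — can be sketched as follows; the toy response function and parameter ranges are invented for illustration and are not the ParFlow-CLM model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "hydrologic" response standing in for a model output.
def model(p):                     # p = [porosity, wilting_point, mannings_n]
    por, wp, mn = p
    return 3.0 * por ** 2 + 1.5 * wp + 0.1 * mn

lo = np.array([0.3, 0.05, 0.01])  # hypothetical lower parameter bounds
hi = np.array([0.6, 0.25, 0.20])  # hypothetical upper parameter bounds

# Latin-hypercube sample of m base points in the unit cube.
m, d = 50, 3
u = (rng.permuted(np.tile(np.arange(m), (d, 1)), axis=1).T
     + rng.random((m, d))) / m
base = lo + u * (hi - lo)

# One-factor-At-a-Time perturbation (fraction fr) around each LH point;
# the LH-OAT sensitivity of factor j averages the relative partial effects.
fr = 0.05
S = np.zeros(d)
for p in base:
    y0 = model(p)
    for j in range(d):
        q = p.copy()
        q[j] *= 1 + fr
        S[j] += abs((model(q) - y0) / (fr * (abs(y0) + 1e-12)))
S /= m
rank = np.argsort(S)[::-1]        # parameters ordered by influence
print(np.round(S, 3), rank)
```

Averaging the OAT effects over many LH base points is what makes the measure global rather than tied to a single reference parameter set.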

How to cite: Qu, W., Bogena, H., Schüth, C., Vereecken, H., and Schulz, S.: Global Sensitivity Analysis of an integrated parallel hydrologic model: ParFlow-CLM, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2782, https://doi.org/10.5194/egusphere-egu22-2782, 2022.

EGU22-3691 | Presentations | HS3.6

Hydrogeological inference by adaptive sequential Monte Carlo with geostatistical resampling model proposals 

Macarena Amaya, Niklas Linde, and Eric Laloy

For strongly non-linear inverse problems, Markov chain Monte Carlo (MCMC) methods may fail to properly explore the posterior probability density function (PDF). Particle methods are very well suited for parallelization and offer an alternative approach whereby the posterior PDF is approximated using the states and weights of a population of evolving particles. In addition, they provide reliable estimates of the evidence (marginal likelihood) needed for Bayesian model selection at essentially no extra cost. We consider adaptive sequential Monte Carlo (ASMC), an extension of annealed importance sampling (AIS). In these methods, importance sampling is performed over a sequence of intermediate distributions, known as power posteriors, linking the prior to the posterior PDF. The main advantages of ASMC with respect to AIS are that it adaptively tunes the tempering between neighboring distributions and that it resamples the particles when the variance of the particle weights becomes too large. We consider a challenging synthetic groundwater transport inverse problem with a categorical channelized 2D hydraulic conductivity field designed such that the posterior facies distribution includes two distinct modes with equal probability. The model proposals are obtained by iteratively re-simulating a fraction of the current model using conditional multi-point statistics (MPS) simulations. We focus here on the ability of ASMC to explore the posterior PDF and compare it with previously published results obtained with parallel tempering (PT), a state-of-the-art MCMC inversion approach that runs multiple interacting chains targeting different power posteriors. For a similar computational budget involving 24 particles for ASMC and 24 chains for PT, the ASMC implementation outperforms PT: the models fit the data better, and the reference likelihood value is contained in the ASMC sampled likelihood range, while this is not the case for the PT range.
Moreover, we show that ASMC recovers both reference modes, while neither is recovered by PT. With 24 particles, however, one of the modes carries a higher weight than the other; the approximation improves when moving to a larger number of particles. As a future development, we suggest that including fast surrogate modeling (e.g., polynomial chaos expansion) within ASMC for the MCMC steps used to evolve the particles between importance sampling steps would strongly reduce the computational cost while still ensuring results of similar quality, as the importance sampling steps could still be performed with the regular, more costly forward solver.
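The ASMC mechanics — adaptive tempering of power posteriors, importance reweighting, resampling, and MCMC moves — can be sketched on a one-dimensional bimodal toy problem. The likelihood, prior and tuning constants below are invented; more particles than the study's 24 are used so the toy runs stably:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal problem echoing the two-mode facies posterior: a wide
# Gaussian prior and a likelihood with equal modes at +2 and -2.
def log_prior(x):
    return -0.5 * (x / 3.0) ** 2

def log_lik(x):
    return np.logaddexp(-0.5 * ((x - 2) / 0.3) ** 2,
                        -0.5 * ((x + 2) / 0.3) ** 2)

n = 200                              # particles (24 in the study)
x = rng.normal(0.0, 3.0, n)          # draws from the prior
W = np.full(n, 1.0 / n)
beta, log_evid = 0.0, 0.0

def ess(w):
    return 1.0 / np.sum(w ** 2)

while beta < 1.0:
    # Adaptive tempering: bisect for the largest step that keeps the
    # effective sample size of the reweighted particles near 90% of n.
    def ess_at(b):
        inc = (b - beta) * log_lik(x)
        w = W * np.exp(inc - inc.max())
        return ess(w / w.sum())
    if ess_at(1.0) >= 0.9 * n:
        new_beta = 1.0
    else:
        lo, hi = beta, 1.0
        for _ in range(40):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if ess_at(mid) >= 0.9 * n else (lo, mid)
        new_beta = min(max(lo, beta + 1e-6), 1.0)
    inc = (new_beta - beta) * log_lik(x)
    m = inc.max()
    log_evid += m + np.log(np.sum(W * np.exp(inc - m)))   # running evidence
    W = W * np.exp(inc - m)
    W /= W.sum()
    beta = new_beta
    if ess(W) < n / 2:               # resample when weights degenerate
        idx = rng.choice(n, n, p=W)
        x, W = x[idx], np.full(n, 1.0 / n)
    # One Metropolis move per particle, targeting the current power posterior.
    prop = x + rng.normal(0.0, 0.5, n)
    log_a = (log_prior(prop) + beta * log_lik(prop)
             - log_prior(x) - beta * log_lik(x))
    x = np.where(np.log(rng.random(n)) < log_a, prop, x)

print(round(beta, 3), round(float(np.sum(W[x > 0])), 2))
```

The final weighted particles carry mass on both modes, and the evidence accumulates as a by-product of the incremental weights, mirroring the two properties the abstract highlights.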

How to cite: Amaya, M., Linde, N., and Laloy, E.: Hydrogeological inference by adaptive sequential Monte Carlo with geostatistical resampling model proposals, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3691, https://doi.org/10.5194/egusphere-egu22-3691, 2022.

EGU22-3782 | Presentations | HS3.6

Uncertainty assessment and data-worth evaluation for estimating soil hydraulic parameters and recharge fluxes from lysimeter data 

Marleen Schübl, Christine Stumpp, and Giuseppe Brunetti

Transient measurements from lysimeters are frequently coupled with Richards-based solvers to inversely estimate soil hydraulic parameters (SHPs) and numerically describe vadose zone water fluxes, such as recharge. To reduce model predictive uncertainty, the lysimeter experiment should be designed to maximize the information content of observations. However, in practice, this is generally done by relying on the a priori expertise of the scientist/user, without exploiting the advantages of model-based experimental design. Thus, the main aim of this study is to demonstrate how model-based experimental design can be used to maximize the information content of observations in multiple scenarios encompassing different soil textural compositions and climatic conditions. The hydrological model HYDRUS is coupled with a Nested Sampling estimator to calculate the parameters’ posterior distributions and the Kullback-Leibler divergences. Results indicate that the combination of seepage flow, soil water content, and soil matric potential measurements generally leads to highly informative designs, especially for fine textured soils, while results from coarse soils are generally affected by higher uncertainty. Furthermore, soil matric potential proves to be more informative than soil water content measurements. Additionally, the propagation of parameter uncertainties in a contrasting (dry) climate scenario strongly increased prediction uncertainties for sandy soil, not only in terms of the cumulative amount and magnitude of the peak, but also in the temporal variability of the seepage flow. 
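The data-worth logic — scoring a candidate design by the Kullback-Leibler divergence from prior to posterior — can be illustrated with a Gaussian-linear toy where the Bayesian update is analytic. The scalar parameter, noise levels and design names below are hypothetical, not the HYDRUS/Nested Sampling setup:

```python
import math

# One scalar soil parameter with a standard-normal prior, observed
# through n_obs measurements y_i = theta + eps, eps ~ N(0, noise_sd^2).
def kl_gauss(mu, var):
    """KL( N(mu, var) || N(0, 1) ): information gained over the prior."""
    return 0.5 * (var + mu ** 2 - 1.0 - math.log(var))

def posterior(noise_sd, n_obs, y_mean=0.4):
    # conjugate Gaussian update
    prec = 1.0 + n_obs / noise_sd ** 2
    var = 1.0 / prec
    mu = var * (n_obs * y_mean / noise_sd ** 2)
    return mu, var

# Hypothetical designs: a precise sensor vs a noisy one, loosely echoing
# matric potential vs water content in the abstract's findings.
designs = {"matric_potential": 0.2, "water_content": 0.8}
info = {name: kl_gauss(*posterior(sd, 10)) for name, sd in designs.items()}
print({name: round(v, 2) for name, v in info.items()})
```

The lower-noise design yields the larger prior-to-posterior divergence, i.e. the more informative experiment, which is the quantity the study computes (with Nested Sampling in place of the analytic update).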

How to cite: Schübl, M., Stumpp, C., and Brunetti, G.: Uncertainty assessment and data-worth evaluation for estimating soil hydraulic parameters and recharge fluxes from lysimeter data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3782, https://doi.org/10.5194/egusphere-egu22-3782, 2022.

EGU22-6882 | Presentations | HS3.6 | Highlight

A review of conceptual model uncertainty in groundwater research 

Okke Batelaan, Trine Enemark, Luk Peeters, and Dirk Mallants

For more than a century, the strong advice in geology has been to rely on multiple working hypotheses. In groundwater research, however, as supported by modelling, a stepwise approach with respect to complexity is often promoted and preferred by many. Defining a hypothesis, let alone multiple hypotheses, and testing these via groundwater models is rarely done. The so-called ‘conceptual model’ is generally considered the starting point of our beloved modelling method. A conceptual model summarises our current knowledge about a groundwater system, describing the hydrogeology and the dominating processes. Conceptual model development should involve formulating hypotheses, leading to modelling choices that steer the model predictions. As many conceptual models can explain the available data, multiple hypotheses allow assessing the conceptual, or structural, uncertainty.

This presentation aims to review key ideas from 125 years of research on (not) handling conceptual hydrogeological uncertainty, identify current approaches, unify scattered insights, and develop a systematic methodology for hydrogeological conceptual model development and testing. We advocate a systematic model development approach based on a mutually exclusive, collectively exhaustive range of hypotheses, even though this is not fully achievable. We provide examples of this approach and of the consequent model testing. It is argued that by following this scientific recipe of refuting alternative models, we will increase the learning from our research, reduce the risk of conceptual surprises and improve the robustness of groundwater assessments. We conclude that acknowledging and explicitly accounting for conceptual uncertainty goes a long way towards producing more reproducible groundwater research. Hypothesis testing is essential to increase system understanding by analyzing and refuting alternative conceptual models. It also provides more confidence in groundwater model predictions, leading to improved groundwater management, which is more important than ever.

How to cite: Batelaan, O., Enemark, T., Peeters, L., and Mallants, D.: A review of conceptual model uncertainty in groundwater research, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6882, https://doi.org/10.5194/egusphere-egu22-6882, 2022.

EGU22-7774 | Presentations | HS3.6

Efficient inversion with complex geostatistical priors using normalizing flows and variational inference 

Shiran Levy, Eric Laloy, and Niklas Linde

We propose an approach for solving geophysical inverse problems that significantly reduces computational costs compared to Markov chain Monte Carlo (MCMC) methods while providing enhanced uncertainty quantification compared to efficient gradient-based deterministic methods. The proposed approach relies on variational inference (VI), which seeks to approximate the unnormalized posterior distribution parametrically for a given family of distributions by solving an optimization problem. Although prone to bias if the family of distributions is too limited, VI provides a computationally efficient approach that scales well to high-dimensional problems. To enhance the expressiveness of the parameterized posterior in the context of geophysical inverse problems, we use a combination of VI and inverse autoregressive flows (IAF), a type of normalizing flow that has been shown to be efficient in machine learning tasks. The IAF consists of invertible neural transport maps transforming an initial density of random variables into a target density, in which the mapping of each instance is conditioned on previous ones. In the combined VI-IAF routine, the approximate distribution is parameterized by the IAF; its potential expressiveness is therefore determined by the architecture of the network. The parameters of the IAF are learned by minimizing the Kullback-Leibler divergence between the approximate posterior, obtained from samples drawn from a standard normal distribution and pushed forward through the IAF, and the target posterior distribution. We test this approach on problems in which complex geostatistical priors are described by latent variables within a deep generative model (DGM) of the adversarial type. Previous results have indicated that inversion based on gradient-based optimization techniques performs poorly in this setting because of the high nonlinearity of the generator.
Preliminary results involving linear physics suggest that the VI-IAF routine can recover the true model and provides high-quality uncertainty quantification at a low computational cost. As a next step, we will consider cases where the forward model is nonlinear and include comparisons against standard MCMC sampling. As most of the inverse problem's nonlinearity arises from the DGM generator, we do not expect significant differences in the quality of the approximations with respect to the linear physics case.
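Setting the flow machinery aside, the core variational loop — fit a parametric density by minimizing the Kullback-Leibler divergence to the posterior using reparameterized samples — can be sketched with a simple Gaussian family standing in for the IAF. This is a deliberate simplification; the target and all settings are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the (unnormalized) target posterior: N(1, 0.5^2),
# chosen so the variational answer can be checked by eye.
def grad_log_p(x):
    return -(x - 1.0) / 0.25

# Variational family q = N(mu, exp(ls)^2). We minimize KL(q || p) with
# reparameterized samples x = mu + exp(ls) * eps -- the same principle
# the VI-IAF routine applies with a far more expressive flow family.
mu, ls = 0.0, 0.0
lr, batch = 0.05, 64
for _ in range(2000):
    eps = rng.normal(size=batch)
    s = np.exp(ls)
    x = mu + s * eps
    # Monte Carlo gradients of E_q[log q - log p] under the
    # reparameterization (the entropy term contributes -1 to g_ls).
    g_mu = np.mean(-grad_log_p(x))
    g_ls = -1.0 + np.mean(-grad_log_p(x) * s * eps)
    mu -= lr * g_mu
    ls -= lr * g_ls

print(round(mu, 2), round(float(np.exp(ls)), 2))
```

The fitted (mu, sigma) approaches the target's (1, 0.5); in the paper's setting the Gaussian family is replaced by the IAF, whose network architecture then bounds how expressive the approximation can be.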

How to cite: Levy, S., Laloy, E., and Linde, N.: Efficient inversion with complex geostatistical priors using normalizing flows and variational inference, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7774, https://doi.org/10.5194/egusphere-egu22-7774, 2022.

EGU22-8583 | Presentations | HS3.6

Quantifying transport ability of hindcast and forecast ocean models 

Makrina Agaoglou, Guillermo García-Sánchez, Amaia Marcano Larrinaga, Gabriel Mouttapa, and Ana M. Mancho

In recent years, there has been much interest in uncertainty quantification involving trajectories in ocean data sets. As more and more oceanic data become available, assessing the quality of ocean models for addressing transport problems such as oil spills or chemical and plastic transport becomes of vital importance. In our work we use two types of ocean model products, hindcast and forecast, in a specific domain of the North Atlantic where drifter trajectory data were available. The hindcast approach requires running ocean (or atmospheric) models for a past period, usually several decades long, whereas the forecast approach predicts future states. Both ocean products are provided by CMEMS. Hindcast data include additional observational data that were time-delayed and therefore not available to the original forecast run. This means that, in principle, hindcast data are more accurate than archived forecast data. In this work, we focus on comparing the transport capacity of hindcast and forecast products in the Gulf Stream and the Atlantic Ocean, based on the dynamical structures of the systems describing the underlying transport problem, in the spirit of [1]. We go a step further by quantifying the transport performance of each model against observed drifters using tools developed in [2].

Acknowledgments

MA acknowledges support from the grant CEX2019-000904-S and IJC2019-040168-I funded by: MCIN/AEI/ 10.13039/501100011033, AMM and GGS acknowledge support from CSIC PIE grant Ref. 202250E001.

References

[1] C. Mendoza, A. M. Mancho, and S. Wiggins, Lagrangian descriptors and the assessment of the predictive capacity of oceanic data sets, Nonlin. Processes Geophys., 21, 677–689, 2014, doi:10.5194/npg-21-677-2014

[2] G.García-Sánchez, A.M.Mancho, and S.Wiggins, A bridge between invariant dynamical structures and uncertainty quantification, Commun Nonlinear Sci Numer Simulat 104, 106016, 2022, doi:10.1016/j.cnsns.2021.106016 

How to cite: Agaoglou, M., García-Sánchez, G., Marcano Larrinaga, A., Mouttapa, G., and Mancho, A. M.: Quantifying transport ability of hindcast and forecast ocean models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8583, https://doi.org/10.5194/egusphere-egu22-8583, 2022.

EGU22-8729 | Presentations | HS3.6

Bayesian parameter inference in hydrological modelling using a Hamiltonian Monte Carlo approach with a stochastic rain model

S. Ulzega and C. Albert

Conceptual models are indispensable tools in hydrology. In order to use them for making probabilistic predictions, they need to be equipped with an adequate error model, which, for ease of inference, is traditionally formulated as an additive error on the output (discharge). However, the main sources of uncertainty in hydrological modelling are typically not to be found in the output, but in the input (rain) and in the model structure. Therefore, more reliable error models and probabilistic predictions can be obtained by incorporating those uncertainties directly where they arise, that is, into the model. This, however, leads to stochastic models, which render traditional inference algorithms such as the Metropolis algorithm infeasible due to their expensive likelihood functions. Thanks to recent advancements in algorithms and computing power, however, full-fledged Bayesian inference with stochastic models is no longer off-limits for hydrological applications. We demonstrate this with a case study from urban hydrology, for which we employ a highly efficient Hamiltonian Monte Carlo inference algorithm with a time-scale separation.
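The core of a Hamiltonian Monte Carlo sampler — leapfrog integration of an auxiliary dynamics followed by a Metropolis accept/reject step on the energy error — can be sketched on a toy one-dimensional target. The hydrological application wraps the same machinery around a stochastic rain model with many latent variables; this sketch is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: a standard normal posterior, so U(q) = -log p(q) + const.
def U(q):
    return 0.5 * q ** 2

def dU(q):
    return q

def hmc_step(q, eps=0.3, L=10):
    """One HMC transition: sample momentum, leapfrog, accept/reject."""
    p = rng.normal()
    q_new, p_new = q, p
    p_new -= 0.5 * eps * dU(q_new)       # half-step in momentum
    for _ in range(L - 1):
        q_new += eps * p_new
        p_new -= eps * dU(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * dU(q_new)       # final half-step
    # Metropolis correction on the total-energy (Hamiltonian) error.
    dH = (U(q_new) + 0.5 * p_new ** 2) - (U(q) + 0.5 * p ** 2)
    return q_new if np.log(rng.random()) < -dH else q

samples = []
q = 3.0                                  # deliberately poor starting point
for i in range(5000):
    q = hmc_step(q)
    if i >= 500:                         # discard burn-in
        samples.append(q)
s = np.array(samples)
print(round(float(s.mean()), 2), round(float(s.std()), 2))
```

Because proposals follow the simulated dynamics rather than a random walk, HMC explores high-dimensional posteriors far more efficiently, which is what makes inference with expensive stochastic likelihoods feasible.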

How to cite: Ulzega, S. and Albert, C.: Bayesian parameter inference in hydrological modelling using a Hamiltonian Monte Carlo approach with a stochastic rain model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8729, https://doi.org/10.5194/egusphere-egu22-8729, 2022.

EGU22-9902 | Presentations | HS3.6

hydroMOPSO: A versatile Particle Swarm Optimization R package for multi-objective calibration of environmental and hydrological models

R. Marinao-Rivas and M. Zambrano-Bigiarini

In this work we introduce hydroMOPSO, a novel multi-objective R package that combines two search mechanisms, Particle Swarm Optimisation (PSO) and genetic operations, to maintain the diversity of the population and accelerate its convergence towards the Pareto-optimal set. hydroMOPSO is model-independent, which makes it possible to interface any model code with the calibration engine, including models available in R (e.g., TUWmodel, airGR, topmodel) and any other complex models that can be run from the system console (e.g. SWAT+, Raven, WEAP). In addition, hydroMOPSO is platform-independent, which allows it to run on GNU/Linux, Mac OSX and Windows systems, among others.

Considering the long execution time of some real-world models, we used three benchmark functions to search for a configuration that allows to reach the Pareto-optimal front with a low number of model evaluations, analysing different combinations of: i) the swarm size in PSO, ii) the maximum number of particles in the external archive, and iii) the maximum number of genetic operations in the external archive. In addition, the previous configuration was then evaluated against other state-of-the-art multi-objective optimisation algorithms (MMOPSO, NSGA-II, NSGA-III). Finally, hydroMOPSO was used to calibrate a GR4J-CemaNeige hydrological model implemented in the Raven modelling framework (http://raven.uwaterloo.ca), using two goodness-of-fit functions: i) the modified Kling-Gupta efficiency (KGE') and ii) the Nash-Sutcliffe efficiency with inverted flows (iNSE).

Our results showed that the selected hydroMOPSO configuration is highly competitive with, or even superior to, MMOPSO, NSGA-II and NSGA-III in terms of the number of function evaluations required to achieve stabilisation of the Pareto front, and also showed some advantages of using a compromise solution instead of a single-objective one for the estimation of hydrological model parameters.
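The archive bookkeeping behind any multi-objective calibrator of this kind rests on Pareto dominance. A minimal non-dominated filter, framed here as minimizing two hypothetical objectives (e.g. 1-KGE' and 1-iNSE, with invented values), might look like:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (1-KGE', 1-iNSE) pairs for five parameter sets.
objs = [(0.10, 0.40), (0.20, 0.20), (0.40, 0.10), (0.30, 0.30), (0.15, 0.45)]
print(pareto_front(objs))   # → [(0.1, 0.4), (0.2, 0.2), (0.4, 0.1)]
```

A "compromise solution", as discussed above, would then be picked from this front, e.g. the member closest to the ideal point, rather than optimizing either objective alone.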

How to cite: Marinao-Rivas, R. and Zambrano-Bigiarini, M.: hydroMOPSO: A versatile Particle Swarm Optimization R package for multi-objective calibration of environmental and hydrological models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9902, https://doi.org/10.5194/egusphere-egu22-9902, 2022.

EGU22-10431 | Presentations | HS3.6

Consistency and variability of spatial and temporal patterns of parameter dominance on four simulated hydrological variables in mHM in a large basin study 

Björn Guse, Stefan Lüdtke, Oldrich Rakovec, Stephan Thober, Thorsten Wagener, and Luis Samaniego

Model parameters are implemented in hydrological models to represent hydrological processes as accurately as possible under different catchment conditions. In the case of the mesoscale Hydrological Model (mHM), parameters are estimated via transfer functions and scaling rules using the Multiscale Parameter Regionalization (MPR) approach [1]. In this way, one consistent parameter set is selected for the entire model domain. To understand the impact of model parameters on simulated variables under different hydrological conditions, the spatio-temporal variability of parameter dominance and its relationship to the corresponding processes needs to be investigated.

In this study, mHM is applied to more than one hundred German basins, including headwater areas in neighboring countries. To analyze the relevance of model parameters, a temporally resolved parameter sensitivity analysis using the FAST algorithm [2] is applied to derive the dominant model parameters for each day. The daily sensitivities were further aggregated to monthly and seasonal averages. By analyzing a large number of basins, not only the temporal but also the spatial variability in parameter relevance can be assessed. Four hydrological variables were used as target variables for the sensitivity analysis: runoff, actual evapotranspiration, soil moisture and groundwater recharge.
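The FAST principle — drive each parameter along a periodic search curve at its own frequency and read its sensitivity off the output spectrum — can be sketched on a toy response. The frequencies, harmonic count and response function below are illustrative, not the mHM setup:

```python
import numpy as np

M = 4                        # harmonics retained per driver frequency
freqs = [11, 35]             # driver frequencies, chosen interference-free
n = 2 ** 12
s = np.linspace(-np.pi, np.pi, n, endpoint=False)

# Search curves: each parameter oscillates in [0, 1] at its own frequency
# (the classic arcsine-of-sine transform gives uniform sampling).
p = [0.5 + np.arcsin(np.sin(w * s)) / np.pi for w in freqs]
y = 4.0 * p[0] ** 2 + 1.0 * p[1]    # toy response; parameter 0 dominates

c = np.fft.rfft(y) / n              # Fourier coefficients of the response
var_tot = float(np.var(y))
# First-order sensitivity: share of output variance carried by each
# parameter's frequency and its first M harmonics.
S = [2 * sum(abs(c[k * w]) ** 2 for k in range(1, M + 1)) / var_tot
     for w in freqs]
print([round(v, 2) for v in S])
```

Repeating such a variance decomposition per time step, as in [2], is what yields the temporally resolved dominance patterns analysed in the study.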

The analysis of the temporal parameter sensitivity shows that the dominant parameters vary in space and time and across target variables. Soil material parameters are most dominant for runoff and recharge. A seasonal switch in parameter dominance was detected for an infiltration parameter and an evapotranspiration parameter, which dominate soil moisture in winter and summer, respectively. The opposite seasonal dominance pattern of these two parameters was identified for actual evapotranspiration. Furthermore, each parameter shows high sensitivities for either high or low values of one or more hydrological variables. The parameter estimation approach leads to spatially consistent patterns of parameter dominance. Spatial differences and similarities in parameter sensitivities could be explained by catchment variability.

The results improve the understanding of how model parameters control the simulated processes in mHM. This information can support more efficient parameter identification, model calibration and improved MPR transfer functions.

 

References

[1] Samaniego et al. (2010, WRR), https://doi.org/10.1029/2008WR007327

[2] Reusser et al. (2011, WRR), https://doi.org/10.1029/2010WR009947

How to cite: Guse, B., Lüdtke, S., Rakovec, O., Thober, S., Wagener, T., and Samaniego, L.: Consistency and variability of spatial and temporal patterns of parameter dominance on four simulated hydrological variables in mHM in a large basin study, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10431, https://doi.org/10.5194/egusphere-egu22-10431, 2022.

EGU22-10654 | Presentations | HS3.6 | Highlight

Uncertainty assessment with Bluecat: Recognising randomness as a fundamental component of physics 

Alberto Montanari and Demetris Koutsoyiannis

We present a new method for simulating and predicting hydrologic variables, in particular river flows, which is rooted in probability theory and conceived to provide a reliable quantification of the associated uncertainty for operational applications. In fact, recent practical experience during extreme events has shown that simulation and prediction uncertainty is essential information for decision makers and the public. A reliable and transparent uncertainty assessment has also been shown to be essential to gain public and institutional trust in real science. Our approach, which we term "Bluecat", assumes that randomness is a fundamental component of physics and results from a theoretical and numerical development. Bluecat is conceived to make transparent and intuitive use of uncertain observations, which in turn mirror the observed reality. Therefore, Bluecat combines a rigorous theory with the proof of concept that environmental resources should be managed by making the best use of empirical evidence and experience, recognising randomness as an intrinsic property of hydrological systems. We provide open and user-friendly software to apply the method to the simulation and prediction of river flows and test Bluecat's reliability for operational applications.
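One way to read the transparent, data-driven use of uncertain observations described above is the following sketch — our simplified interpretation on synthetic data, not the authors' Bluecat software: the predictive distribution attached to a new simulation value is formed from the observed values whose past simulated counterparts were closest to it.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic calibration record: paired simulated and "observed" flows,
# with reality differing from the model by multiplicative noise.
sim = rng.gamma(2.0, 5.0, 2000)
obs = sim * rng.lognormal(0.0, 0.2, 2000)

def predictive_quantiles(s_new, k=100, qs=(0.05, 0.5, 0.95)):
    """Empirical predictive quantiles for a new simulation value, taken
    from the observations tied to the k nearest past simulated values."""
    idx = np.argsort(np.abs(sim - s_new))[:k]
    return np.quantile(obs[idx], qs)

lo_q, med, hi_q = predictive_quantiles(10.0)
print(round(float(lo_q), 1), round(float(med), 1), round(float(hi_q), 1))
```

The uncertainty band thus comes directly from the empirical evidence, with randomness treated as an intrinsic property of the record rather than as an additive afterthought.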

How to cite: Montanari, A. and Koutsoyiannis, D.: Uncertainty assessment with Bluecat: Recognising randomness as a fundamental component of physics, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10654, https://doi.org/10.5194/egusphere-egu22-10654, 2022.

EGU22-11794 | Presentations | HS3.6

Effect of regional heterogeneities on inversion stability and estimated hydraulic properties field 

Hervé Jourde, Mohammed Aliouache, Pierre Fischer, Xiaoguang Wang, and Gerard Massonnat

Hydraulic tomography (HT) has shown great potential for estimating the spatial distribution of heterogeneous aquifer properties over the last decade. Although the method performs well in synthetic studies, the transition from synthetic models to real field applications is often associated with numerical instabilities. Inversion techniques can also suffer from ill-posedness and the non-uniqueness of the estimates, since several solutions might reproduce the observed hydraulic data equally well. In this work, we investigate the origin of the instabilities observed when performing HT with real field drawdown data. We first identify the cause of these instabilities. We then use different approaches, one of which is newly proposed, to regain inverse model stability, which also allows different hydraulic property fields to be estimated at local and regional scales. Results show that ill-posed models can lead to inversion instability, while the different approaches that limit these instabilities may yield different estimates. The study also shows that the late-time hydraulic responses are strongly linked to the boundary conditions and thus to the regional heterogeneity. Accordingly, using these late-time data in the inversion might require a larger inverted domain, so it is recommended to place the boundary conditions of the forward model far away from the wells. The proposed technique provides a tool to obtain a satisfactory fit of the observations and to assess both the site-scale heterogeneity and the surrounding regional variability.

How to cite: Jourde, H., Aliouache, M., Fischer, P., Wang, X., and Massonnat, G.: Effect of regional heterogeneities on inversion stability and estimated hydraulic properties field, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11794, https://doi.org/10.5194/egusphere-egu22-11794, 2022.

EGU22-11844 | Presentations | HS3.6

Benchmarking Automatically Identified Model Structures with a Large Model Ensemble 

Diana Spieler, Kan Lei, and Niels Schütze

Recent studies have introduced methods to simultaneously calibrate model structure choices and parameter values to identify an appropriate (conceptual) model structure for a given catchment. This can be done through mixed-integer optimization to identify the graph structure that links dominant flow processes (Spieler et al., 2020) or, likewise, by continuous optimization of weights when blending multiple flux equations to describe flow processes within a model (Chlumsky et al., 2021). Here, we use the combination of the mixed-integer optimization algorithm DDS and the modular modelling framework RAVEN and refer to it as Automatic Model Structure Identification (AMSI) framework.

This study validates the AMSI framework by comparing the performance of the identified AMSI model structures to two benchmark ensembles. The first ensemble consists of the best model structures from a brute-force calibration of all possible structures included in the AMSI model space (7488+). The second ensemble consists of 35+ MARRMoT structures representing a structurally more diverse set of models than currently implemented in the AMSI framework. These structures stem from the MARRMoT Toolbox introduced by Knoben et al. (2019), which provides established conceptual model structures based on the hydrologic literature.

We analyze whether the model structures AMSI identifies are identical to the best-performing structures of the brute-force calibration and comparable in performance to the MARRMoT ensemble. We conclude that model structures identified with the AMSI framework can compete with the structurally more diverse MARRMoT ensemble. In fact, we were surprised to see how well a simple two-storage structure performs over the 12 tested MOPEX catchments (Duan et al., 2006). We aim to discuss several emerging questions, such as the selection of a robust model structure, equifinality in model structures, and the role of structural complexity.

 

Spieler et al. (2020). https://doi.org/10.1029/2019WR027009

Chlumsky et al. (2021). https://doi.org/10.1029/2020WR029229

Knoben et al. (2019). https://doi.org/10.5194/gmd-12-2463-2019

Duan et al. (2006). https://doi.org/10.1016/j.jhydrol.2005.07.031

How to cite: Spieler, D., Lei, K., and Schütze, N.: Benchmarking Automatically Identified Model Structures with a Large Model Ensemble, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11844, https://doi.org/10.5194/egusphere-egu22-11844, 2022.

EGU22-12691 | Presentations | HS3.6

Ocean climate indices and Total Solar Irradiance: causality over the past few decades and revision of indices uncertainties

A. Skakun and D. Volobuev

Pearson’s correlation is usually used as a criterion for the presence or absence of a relationship between time series, but it is not always informative for nonlinear systems such as the climate. Therefore, we apply a method from nonlinear dynamics to detect connections in the Sun-climate system. Here we estimate the causal relationship between Total Solar Irradiance (TSI) and ocean climate indices over the past few decades using the method of conditional dispersions (Cenys et al., 1991). We use a conceptual ocean-atmosphere model (Jin, 1997) with TSI added as a forcing to calibrate the method. We show that the method provides the expected results for the connection between TSI and the model temperature. Admixing Gaussian noise to the model data decreases the detectable causality as the noise amplitude increases, and a similar effect occurs in the empirical data. Moreover, in the case of the empirical data, we show that the method can be used to independently estimate the uncertainties of ocean climate indices.
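The conditional-dispersion idea can be sketched on unidirectionally coupled chaotic maps (the toy system and coupling strength are invented for illustration, not the TSI-ocean data): if X drives Y, the dispersion of Y over pairs of times at which X takes similar values shrinks as the similarity threshold tightens.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy system: a logistic map x unilaterally driving a second map y.
n, C = 4000, 0.9
x = np.empty(n)
y = np.empty(n)
x[0], y[0] = 0.4, 0.3
for t in range(n - 1):
    fx = 3.9 * x[t] * (1 - x[t])
    fy = 3.9 * y[t] * (1 - y[t])
    x[t + 1] = fx
    y[t + 1] = (1 - C) * fy + C * fx     # coupling: x -> y only

def cond_dispersion(driver, driven, eps, m=200_000):
    """RMS difference of the driven series over random index pairs whose
    driver values lie within eps of each other."""
    i = rng.integers(0, n, m)
    j = rng.integers(0, n, m)
    close = np.abs(driver[i] - driver[j]) < eps
    return float(np.sqrt(np.mean((driven[i[close]] - driven[j[close]]) ** 2)))

d_wide = cond_dispersion(x, y, 0.20)
d_narrow = cond_dispersion(x, y, 0.02)
print(round(d_wide, 3), round(d_narrow, 3))
```

The drop of the conditional dispersion with shrinking neighbourhood signals the x-to-y dependence; adding noise to either series (as in the calibration experiment above) flattens this decrease and thus reduces the detectable causality.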

How to cite: Skakun, A. and Volobuev, D.: Ocean climate indices and Total Solar Irradiance: causality over the past few decades and revision of indices uncertainties, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12691, https://doi.org/10.5194/egusphere-egu22-12691, 2022.

EGU22-20 | Presentations | ITS2.6/AS5.1

PRECISIONPOP: a multi-scale monitoring system for poplar plantations integrating field, aerial and satellite remote sensing 

Francesco Chianucci, Francesca Giannetti, Clara Tattoni, Nicola Puletti, Achille Giorcelli, Carlo Bisaglia, Elio Romano, Massimo Brambilla, Piermario Chiarabaglio, Massimo Gennaro, Giovanni d'Amico, Saverio Francini, Walter Mattioli, Domenico Coaloa, Piermaria Corona, and Gherardo Chirici

Poplar (Populus spp.) plantations are widespread across the Northern Hemisphere and provide a wide range of benefits and products, including timber, carbon sequestration and phytoremediation. Because of poplar's specific features (fast growth, short rotation), the associated information needs require frequent updates, which exceed the traditional scope of National Forest Inventories, implying the need for ad-hoc monitoring solutions.

Here we present a regional-level multi-scale monitoring system for poplar plantations, based on the integration of different remotely sensed data sources at different spatial scales, developed for the Lombardy region (Northern Italy). The system is based on three levels of information: 1) at plot scale, terrestrial laser scanning (TLS) was used to develop non-destructive tree stem volume allometries in calibration sites; the produced allometries were then used to estimate plot-level stand parameters from field inventories, and additional canopy structure attributes were derived using field digital cover photography; 2) at farm level, unmanned aerial vehicles (UAVs) equipped with multispectral sensors were used to upscale the results obtained from field data; 3) finally, both field and UAV estimates were used to calibrate a regional-scale supervised continuous monitoring system based on multispectral Sentinel-2 imagery, implemented and updated on a Google Earth Engine platform.

The combined use of multi-scale information allows effective management and monitoring of poplar plantations. From a top-down perspective, the continuous satellite monitoring system enables early detection of poplar stress, which is suitable for scheduling variable-rate irrigation and fertilization. From a bottom-up perspective, the spatially explicit nature of TLS measurements allows better integration with remotely sensed data, enabling a multi-scale assessment of poplar plantation structure at different levels of detail, enhancing conventional tree inventories, and supporting effective management strategies. Finally, UAVs are key in poplar plantations because their spatial resolution is suited to calibrating metrics from coarser remotely sensed products, reducing or avoiding the need for ground measurements and significantly cutting time and costs.

How to cite: Chianucci, F., Giannetti, F., Tattoni, C., Puletti, N., Giorcelli, A., Bisaglia, C., Romano, E., Brambilla, M., Chiarabaglio, P., Gennaro, M., d'Amico, G., Francini, S., Mattioli, W., Coaloa, D., Corona, P., and Chirici, G.: PRECISIONPOP: a multi-scale monitoring system for poplar plantations integrating field, aerial and satellite remote sensing, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-20, https://doi.org/10.5194/egusphere-egu22-20, 2022.

EGU22-124 | Presentations | ITS2.6/AS5.1

Unsupervised machine learning driven Prospectivity analysis of REEs in NE India 

Malcolm Aranha and Alok Porwal

Traditional mineral prospectivity modelling for mineral exploration and targeting relies heavily on manual data filtering and processing to extract desirable geological features based on expert knowledge. It involves the integration of geological predictor maps that are derived by time-consuming and labour-intensive pre-processing of primary geoscientific data to serve as spatial proxies of mineralisation processes. Moreover, the selection of these spatial proxies is guided by conceptual genetic modelling of the targeted deposit type, which may be biased by the subjective preferences of an expert geologist. This study applies Self-Organising Maps (SOMs), a neural-network-based unsupervised machine learning clustering algorithm, to gridded geophysical and topographical datasets in order to identify and delineate regional-scale exploration targets for carbonatite-alkaline-complex-related REE deposits in northeast India. The study did not use interpreted, processed or manually generated data, such as surface or bedrock geological maps or fault traces, and relies on the algorithm to identify crucial features and delineate prospective areas. The results were then compared with those of a previous supervised, knowledge-driven prospectivity analysis and found to be comparable. Unsupervised machine learning algorithms are therefore reliable tools to automate the manual process of mineral prospectivity modelling and are robust, time-saving alternatives to knowledge-driven or supervised data-driven prospectivity modelling. These methods would be instrumental in unexplored terrains for which little or no geological knowledge is available.
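The clustering step at the core of this approach can be illustrated with a minimal self-organising map. The sketch below (Python/NumPy; the grid size, learning-rate schedule and toy data are illustrative assumptions, not the study's configuration) trains a small SOM and maps samples to their best-matching units:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0):
    """Train a minimal Self-Organising Map on the rows of `data`."""
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]          # node coordinates for the neighbourhood
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1 - frac)                     # decaying learning rate
            sigma = sigma0 * (1 - frac) + 1e-9        # shrinking neighbourhood
            # Best-matching unit (BMU): node whose codebook vector is closest.
            dist = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(dist), (h, w))
            # Gaussian neighbourhood update around the BMU.
            g = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

def bmu(weights, x):
    dist = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dist), dist.shape)

# Toy "gridded geophysical layers": two well-separated clusters of samples.
a = rng.normal(0.0, 0.3, size=(200, 3))
b = rng.normal(3.0, 0.3, size=(200, 3))
som = train_som(np.vstack([a, b]))
# Samples from distinct clusters map to distinct SOM nodes.
print(bmu(som, a[0]), bmu(som, b[0]))
```

After training, the map of BMU assignments plays the role of the delineated prospective areas in the study.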

How to cite: Aranha, M. and Porwal, A.: Unsupervised machine learning driven Prospectivity analysis of REEs in NE India, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-124, https://doi.org/10.5194/egusphere-egu22-124, 2022.

EGU22-654 | Presentations | ITS2.6/AS5.1

On the derivation of data-driven models for partially observed systems 

Said Ouala, Bertrand Chapron, Fabrice Collard, Lucile Gaultier, and Ronan Fablet

When considering the modeling of dynamical systems, the increasing interest in machine learning, artificial intelligence and, more generally, data-driven representations, together with the increasing availability of data, has motivated the exploration and definition of new identification techniques. These new data-driven representations aim at solving modern questions regarding the modeling, the prediction and, ultimately, the understanding of complex systems such as the ocean, the atmosphere and the climate.

In this work, we focus on one question regarding the ability to define a (deterministic) dynamical model from a sequence of observations. We focus on sea surface observations and show that these observations typically relate to some, but not all, components of the underlying state space, making the derivation of a deterministic model in the observation space impossible. In this context, we formulate the identification problem as the definition, from data, of an embedding of the observations, parameterized by a differential equation. Compared to state-of-the-art techniques based on delay embedding and linear decomposition of the underlying operators, the proposed approach benefits from advances in machine learning and dynamical systems theory to define, constrain and tune the reconstructed state space and the approximate differential equation. Furthermore, the proposed embedding methodology naturally extends to cases in which a dynamical prior (derived, for example, from physical principles) is known, leading to relevant physics-informed data-driven models.
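For context, the delay-embedding baseline that the authors compare against can be sketched in a few lines. In the example below (Python/NumPy; the x-component of a Lorenz-63 trajectory stands in for a partially observed sea surface variable, and the embedding dimension and lag are arbitrary illustrative choices), Takens-style delay vectors are built from a scalar observation series:

```python
import numpy as np

def delay_embed(obs, dim=3, tau=5):
    """Takens-style delay embedding: map a scalar series to vectors
    [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(obs) - (dim - 1) * tau
    return np.stack([obs[i * tau : i * tau + n] for i in range(dim)], axis=1)

def lorenz_x(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrate Lorenz-63 and return only the x-component,
    mimicking a partially observed system."""
    x, y, z = 1.0, 1.0, 1.0
    out = np.empty(n)
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = x
    return out

obs = lorenz_x(2000)
emb = delay_embed(obs, dim=3, tau=10)   # reconstructed 3-D state space
print(emb.shape)
```

The embedding reconstructs a usable state space from the observed component alone; the abstract's contribution is to learn such an embedding jointly with its governing differential equation instead of fixing it by hand.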

How to cite: Ouala, S., Chapron, B., Collard, F., Gaultier, L., and Fablet, R.: On the derivation of data-driven models for partially observed systems, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-654, https://doi.org/10.5194/egusphere-egu22-654, 2022.

EGU22-1255 | Presentations | ITS2.6/AS5.1

A Deep Learning approach to de-bias Air Quality forecasts, using heterogeneous Open Data sources as reference 

Antonio Pérez, Mario Santa Cruz, Johannes Flemming, and Miha Razinger

The degradation of air quality is a challenge that policy-makers face all over the world. According to the World Health Organisation, air pollution causes an estimated 7 million premature deaths every year. In this context, air quality forecasts are crucial tools for decision- and policy-makers to achieve data-informed decisions.

Global forecasts, such as the Copernicus Atmosphere monitoring service model (CAMS), usually exhibit biases: systematic deviations from observations. Adjusting these biases is typically the first step towards obtaining actionable air quality forecasts. It is especially relevant in health-related decisions, when the metrics of interest depend on specific thresholds.

AQ (Air Quality) Bias Correction was a project funded by the ECMWF Summer of Weather Code (ESOWC) 2021 whose aim was to improve CAMS model forecasts for air quality variables (NO2, O3, PM2.5), using as a reference the in-situ observations provided by OpenAQ. The adjustment, based on machine learning methods, was performed over a set of locations of interest provided by the ECMWF, for the period June 2019 to March 2021.

The machine learning approach uses three different deep-learning-based models and an extra neural network that gathers their outputs. Two of the three DL-based models are independent and share the same structure, built upon the InceptionTime module: they use both meteorological and air quality variables to exploit the temporal variability and to extract the most meaningful features of the past [t-24h, t-23h, … t-1h] and future [t, t+1h, …, t+23h] CAMS predictions. The third model feeds the static station attributes (longitude, latitude and elevation) through a multilayer perceptron. The features extracted by these three models are fed into another multilayer perceptron to predict the upcoming errors at hourly resolution [t, t+1h, …, t+23h]. As a final step, five different initializations are ensembled with equal weights to obtain a more stable regressor.

Prior to this modelling, CAMS forecasts of air quality variables were biased regardless of the location of interest and the variable (on average: biasNO2 = -22.76, biasO3 = 44.30, biasPM2.5 = 12.70). In addition, the skill of the model, measured by the Pearson correlation, did not reach 0.5 for any of the variables, with remarkably low values for NO2 and O3 (on average: pearsonNO2 = 0.10, pearsonO3 = 0.14).

The AQ-BiasCorrection models properly correct these biases. Overall, the number of stations whose biases improve in both the train and test sets is 52 out of 61 (85%) for NO2, 62 out of 67 (92%) for O3, and 80 out of 102 (78%) for PM2.5. Furthermore, the bias improves, with declines of -1.1%, -9.7% and -13.9% for NO2, O3 and PM2.5 respectively. In addition, the model skill, measured through the Pearson correlation, increases, with overall skill improvements in the range of 100-400%.
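The evaluation metrics used above and the final equal-weight ensembling step can be sketched as follows (Python/NumPy; the synthetic pollutant series and noise levels are invented for illustration and do not reflect the project's data or models):

```python
import numpy as np

def bias(forecast, obs):
    """Mean systematic deviation of the forecast from observations."""
    return float(np.mean(forecast - obs))

def pearson(forecast, obs):
    """Skill measured as the Pearson correlation with observations."""
    return float(np.corrcoef(forecast, obs)[0, 1])

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 10.0, size=500)                 # synthetic pollutant series
raw = obs + 15.0 + rng.normal(0.0, 20.0, size=500)   # biased, noisy raw forecast

# Final step of the approach: ensemble several model initializations
# with equal weights to obtain a more stable regressor.
members = [obs + rng.normal(0.0, 5.0, size=500) for _ in range(5)]
corrected = np.mean(members, axis=0)

print(bias(raw, obs), bias(corrected, obs))
print(pearson(raw, obs), pearson(corrected, obs))
```

On this toy data the equal-weight mean both removes the systematic offset and raises the correlation, mirroring the two headline metrics reported in the abstract.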

How to cite: Pérez, A., Santa Cruz, M., Flemming, J., and Razinger, M.: A Deep Learning approach to de-bias Air Quality forecasts, using heterogeneous Open Data sources as reference, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1255, https://doi.org/10.5194/egusphere-egu22-1255, 2022.

EGU22-1992 | Presentations | ITS2.6/AS5.1

Approximating downward short-wave radiation flux using all-sky optical imagery using machine learning trained on DASIO dataset. 

Vasilisa Koshkina, Mikhail Krinitskiy, Nikita Anikin, Mikhail Borisov, Natalia Stepanova, and Alexander Osadchiev

Solar radiation is the main source of energy on Earth. Cloud cover is the main physical factor limiting the downward short-wave radiation flux. Modern models of climate and weather forecasting may use physical models describing the passage of radiation through clouds, which is a computationally extremely expensive option for estimating downward radiation fluxes. Instead, one may use parameterizations, i.e., simplified schemes for approximating environmental variables. The purpose of this work is to improve the accuracy of existing parameterizations of downward short-wave radiation fluxes. We approach the problem with various machine learning (ML) models that approximate the downward short-wave radiation flux from all-sky optical imagery, assuming that an all-sky photo contains complete information about the downward short-wave radiation. We examine several types of ML models trained on a dataset of all-sky imagery accompanied by short-wave radiation flux measurements: the Dataset of All-Sky Imagery over the Ocean (DASIO), collected in the Indian, Atlantic and Arctic Oceans during several oceanic expeditions from 2014 to 2021. The best classic ML model outperforms existing parameterizations known from the literature. We will show the results of our study regarding classic ML models as well as an end-to-end ML approach involving convolutional neural networks. Our results suggest that one may retrieve downward short-wave radiation fluxes directly from all-sky imagery. We will also cover some downsides and limitations of the presented approach.

How to cite: Koshkina, V., Krinitskiy, M., Anikin, N., Borisov, M., Stepanova, N., and Osadchiev, A.: Approximating downward short-wave radiation flux using all-sky optical imagery using machine learning trained on DASIO dataset., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1992, https://doi.org/10.5194/egusphere-egu22-1992, 2022.

EGU22-2058 | Presentations | ITS2.6/AS5.1

Deep learning for ensemble forecasting 

Rüdiger Brecht and Alexander Bihlo

Ensemble prediction systems are an invaluable tool for weather prediction. In practice, ensemble predictions are obtained by running several perturbed numerical simulations. However, these systems are associated with a high computational cost and often involve statistical post-processing steps to improve their quality.
Here we propose to use a deep-learning-based algorithm to learn the statistical properties of a given ensemble prediction system, so that the system itself is no longer needed to simulate future ensemble forecasts. This way, the high computational cost of the ensemble prediction system can be avoided while still obtaining its statistical properties from a single deterministic forecast. We show preliminary results demonstrating the ensemble prediction properties for a shallow-water unstable jet simulation on the sphere.

How to cite: Brecht, R. and Bihlo, A.: Deep learning for ensemble forecasting, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2058, https://doi.org/10.5194/egusphere-egu22-2058, 2022.

EGU22-2095 | Presentations | ITS2.6/AS5.1

A Deep Learning Bias Correction Approach for Rainfall Numerical Prediction 

L. Ma

Numerical weather prediction (NWP) models are widely used for operational weather forecasting in meteorological centers. NWP models describe the flow of fluids through a set of governing equations, physical parameterization schemes, and initial and boundary conditions; they therefore often exhibit prediction biases due to insufficient data assimilation and to assumptions or approximations in dynamical and physical processes. To produce gridded rainfall forecasts with high confidence, in this study we present a data-driven deep learning model for correcting NWP rainfall forecasts, which mainly includes a confidence network and a combinatorial network. A focal loss is introduced to deal with the long-tailed distribution of rainfall; it is expected to alleviate the impact of the large span of rainfall magnitudes by transforming the regression problem into several binary classification problems. The deep learning model is used to correct gridded rainfall forecasts from the European Centre for Medium-Range Weather Forecasts Integrated Forecasting System global model (ECMWF-IFS), with forecast lead times of 24 h to 240 h, in Eastern China. First, the rainfall forecast correction problem is treated as an image-to-image translation problem using deep neural networks. Second, ECMWF-IFS forecasts and rainfall observations from recent years are used as training, validation and testing datasets. Finally, the correction performance of the new model is evaluated and compared to several classical machine learning algorithms. In a set of experiments on rainfall forecast error correction, the new model is found to effectively forecast rainfall over the East China region during the flood season of 2020. The experiments also demonstrate that the proposed approach generally performs better at bias correction of rainfall prediction than most classical machine learning approaches.
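A binary focal loss of the kind referred to above can be sketched as follows (Python/NumPy; the gamma and alpha values are the common defaults from Lin et al. 2017, not necessarily those used in this work):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: down-weights easy, well-classified examples so
    that training focuses on rare, hard ones, e.g. heavy-rain pixels in a
    long-tailed rainfall distribution."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    w = np.where(y == 1, alpha, 1 - alpha)   # class-balance weight
    return float(np.mean(-w * (1 - pt) ** gamma * np.log(pt)))

y = np.array([1, 0, 0, 0, 1])
confident = np.array([0.9, 0.1, 0.1, 0.1, 0.9])   # easy examples
uncertain = np.array([0.6, 0.4, 0.4, 0.4, 0.6])   # hard examples
# The (1 - pt)^gamma factor makes easy examples contribute very little.
print(focal_loss(confident, y), focal_loss(uncertain, y))
```

Applying one such loss per rainfall threshold is one way the regression-to-binary-classification decomposition described above can be realized.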

How to cite: Ma, L.: A Deep Learning Bias Correction Approach for Rainfall Numerical Prediction, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2095, https://doi.org/10.5194/egusphere-egu22-2095, 2022.

EGU22-2893 | Presentations | ITS2.6/AS5.1 | Highlight

Bias Correction of Operational Storm Surge Forecasts Using Neural Networks 

Paulina Tedesco, Jean Rabault, Martin Lilleeng Sætra, Nils Melsom Kristensen, Ole Johan Aarnes, Øyvind Breivik, and Cecilie Mauritzen

Storm surges can give rise to extreme floods in coastal areas. The Norwegian Meteorological Institute (MET Norway) produces 120-hour regional operational storm surge forecasts along the coast of Norway based on the Regional Ocean Modeling System (ROMS). Despite advances in the development of models and computational capability, forecast errors remain large enough to impact response measures and issued alerts, in particular, during the strongest storm events. Reducing these errors will positively impact the efficiency of the warning systems while minimizing efforts and resources spent on mitigation.

Here, we investigate how forecasts can be improved with residual learning, i.e., training data-driven models to predict, and correct, the error in the ROMS output. For this purpose, sea surface height data from stations around Norway were collected and compared with the ROMS output.

We develop two different residual learning frameworks that can be applied on top of the ROMS output. In the first one, we perform binning of the model error, conditioned on pressure, wind, and waves. Clear error patterns are visible when the error conditioned on wind is plotted in a polar plot for each station. These error maps can be stored as correction lookup tables to be applied to the ROMS output. However, since wind, pressure, and waves are correlated, we cannot simultaneously correct the error associated with each variable using this method. To overcome this limitation, we develop a second method, which resorts to Neural Networks (NNs) to perform nonlinear modeling of the error pattern obtained at each station.

The residual NN method strongly outperforms the error map method and is a promising direction for correcting storm surge models operationally. Indeed, i) the method is applied on top of the existing model and requires no changes to it, ii) all predictors used for NN inference are available operationally, iii) prediction by the NN is very fast, typically a few seconds per station, and iv) the NN correction can be provided to a human expert, who can inspect it, compare it with the ROMS output, and see how much correction is brought by the NN. Using this NN residual error correction method, the RMS error in the Oslofjord is reduced by typically 7% for lead times of 24 hours, 17% for 48 hours, and 35% for 96 hours.
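The residual-learning idea, predicting the model error from operational predictors and adding it back, can be sketched as follows (Python/NumPy; a linear least-squares fit stands in for the neural network, and the synthetic wind-dependent error is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic setup: observed surge = model forecast + an error that depends
# on wind (the structure the error maps / neural network are meant to learn).
wind = rng.uniform(0.0, 25.0, size=1000)
model = rng.normal(0.5, 0.3, size=1000)                 # stand-in "ROMS" output
obs = model + 0.02 * wind - 0.2 + rng.normal(0.0, 0.05, size=1000)

# Residual learning: fit a model of the error (obs - model) from predictors,
# then correct the forecast by adding the predicted error back.
X = np.column_stack([wind, np.ones_like(wind)])
coef, *_ = np.linalg.lstsq(X, obs - model, rcond=None)
corrected = model + X @ coef

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print(rmse(model, obs), rmse(corrected, obs))   # correction reduces the RMSE
```

The same wrapper structure is what makes the approach operationally attractive: the physical model is untouched, and the correction is a cheap post-processing step.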

How to cite: Tedesco, P., Rabault, J., Sætra, M. L., Kristensen, N. M., Aarnes, O. J., Breivik, Ø., and Mauritzen, C.: Bias Correction of Operational Storm Surge Forecasts Using Neural Networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2893, https://doi.org/10.5194/egusphere-egu22-2893, 2022.

EGU22-3977 | Presentations | ITS2.6/AS5.1 | Highlight

Learning quasi-geostrophic turbulence parametrizations from a posteriori metrics 

Hugo Frezat, Julien Le Sommer, Ronan Fablet, Guillaume Balarac, and Redouane Lguensat

Machine learning techniques are now ubiquitous in the geophysical science community. They have been applied in particular to the prediction of subgrid-scale parametrizations, using data that describe small-scale dynamics as a function of large-scale states. However, these models are then used to predict temporal trajectories, which is not covered by this instantaneous mapping. Following the model trajectory during training can be done using an end-to-end approach in which temporal integration is performed by a neural network. As a consequence, the approach optimizes a posteriori metrics, whereas classical instantaneous training is limited to a priori ones. Applied to a specific energy backscatter problem found in quasi-geostrophic turbulent flows, the strategy demonstrates long-term stability and high-fidelity statistical performance, without any increase in computational complexity during rollout. These improvements may call into question the future development of realistic subgrid-scale parametrizations in favor of differentiable solvers, as required by the a posteriori strategy.
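The distinction between a priori and a posteriori metrics can be illustrated on a toy linear system (Python/NumPy; the growth rates and horizon are arbitrary): a model that looks nearly perfect under the instantaneous (a priori) metric can still accumulate a large rollout (a posteriori) error, which is why training through the integration matters.

```python
import numpy as np

a_true, a_model = 1.05, 1.06        # slightly mis-identified growth rate
T = 40
traj = a_true ** np.arange(T + 1)   # reference trajectory of x_{t+1} = a * x_t

# A priori metric: instantaneous (single-step) prediction error.
a_priori = float(np.mean((a_model * traj[:-1] - traj[1:]) ** 2))

# A posteriori metric: error of the model's own rollout over the trajectory.
rollout = a_model ** np.arange(T + 1)
a_posteriori = float(np.mean((rollout - traj) ** 2))

print(a_priori, a_posteriori)   # the rollout error is orders of magnitude larger
```

Optimizing the a posteriori loss requires differentiating through the rollout, which is the role of the differentiable solvers mentioned above.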

How to cite: Frezat, H., Le Sommer, J., Fablet, R., Balarac, G., and Lguensat, R.: Learning quasi-geostrophic turbulence parametrizations from a posteriori metrics, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3977, https://doi.org/10.5194/egusphere-egu22-3977, 2022.

EGU22-4062 | Presentations | ITS2.6/AS5.1

Climatological Ocean Surface Wave Projections using Deep Learning 

Peter Mlakar, Davide Bonaldo, Antonio Ricchi, Sandro Carniel, and Matjaž Ličer

We present a numerically cheap machine-learning model which accurately emulates the performance of the surface wave model Simulating WAves Near Shore (SWAN) in the Adriatic basin (north-east Mediterranean Sea).

A ResNet50-inspired deep network architecture with customized spatio-temporal attention layers was used. The network was trained on a 1970-1997 dataset of time-dependent features based on wind fields retrieved from the COSMO-CLM regional climate model (the authors acknowledge Dr. Edoardo Bucchignani (Meteorology Laboratory, Centro Italiano Ricerche Aerospaziali -CIRA-, Capua, Italy) for providing the COSMO-CLM wind fields), with SWAN surface wave model outputs for 1970-1997 used as labels. The period 1998-2000 is used to cross-validate that the network very accurately reproduces SWAN surface wave features (i.e., significant wave height, mean wave period, mean wave direction) at several locations in the Adriatic basin. 

After successful cross validation, a series of projections of ocean surface wave properties based on climate model projections for the end of 21st century (under RCP 8.5 scenario) are performed, and shifts in the emulated wave field properties are discussed.

How to cite: Mlakar, P., Bonaldo, D., Ricchi, A., Carniel, S., and Ličer, M.: Climatological Ocean Surface Wave Projections using Deep Learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4062, https://doi.org/10.5194/egusphere-egu22-4062, 2022.

EGU22-4493 | Presentations | ITS2.6/AS5.1 | Highlight

Semi-automatic tuning procedure for a GCM targeting continental surfaces: a first experiment using in situ observations 

Maëlle Coulon--Decorzens, Frédérique Cheruy, and Frédéric Hourdin

The tuning or calibration of General Circulation Models (GCMs) is an essential stage for their proper behavior. The need to have the best climate projections in the regions where we live drives the need to tune the models in particular towards the land surface, bearing in mind that the interactions between the atmosphere and the land surface remain a key source of uncertainty in regional-scale climate projections [1].

For a long time, this tuning has been done by hand, based on scientific expertise, and has not been sufficiently documented [2]. Recent tuning tools offer the possibility to accelerate climate model development, providing a real tuning formalism as well as a new way to understand climate models. High-Tune Explorer is one such statistical tuning tool, involving machine learning and based on uncertainty quantification. It aims to reduce the range of free parameters that allow realistic model behaviour [3]. A new automatic tuning experiment was developed with this tool for the atmospheric component of the IPSL GCM, LMDZ. It was first tuned at the process level, using several single-column test cases compared to large-eddy simulations, and then at the global level by targeting radiative metrics at the top of the atmosphere [4].

We propose to add a new step to this semi-automatic tuning procedure, targeting atmosphere-land surface interactions. The first aspect of the proposal is to compare coupled atmosphere-continent simulations (here running LMDZ-ORCHIDEE) with in situ observations from the SIRTA observatory located southwest of Paris. In situ observations provide hourly, jointly colocated data with a strong potential for understanding the processes at stake and their representation in the model. These data are also subject to much lower uncertainties than satellite inversions with respect to surface observations. In order to fully benefit from the site observations, the model winds are nudged toward reanalyses. This forces the simulations to follow the effective meteorological sequence, thus allowing comparison between simulations and observations at the process time scale. Removing the errors arising from the representation of large-scale dynamics makes the tuning focus on the representation of physical processes for a given meteorological situation. Finally, the model grid is zoomed in on the SIRTA observatory in order to reduce the computational cost of the simulations while preserving a fine mesh around the observatory.

We show the results of this new tuning step, which succeeds in reducing the domain of acceptable free parameters as well as the dispersion of the simulations. This method, which is less computationally costly than global tuning, is therefore a good way to precondition the latter. It allows the joint tuning of atmospheric and land surface models, traditionally tuned separately [5], and has the advantage of remaining close to the processes and thus improving their understanding.

References:

[1] Cheruy et al., 2014, https://doi.org/10.1002/2014GL061145

[2] Hourdin et al., 2017, https://doi.org/10.1175/BAMS-D-15-00135.1

[3] Couvreux et al., 2021, https://doi.org/10.1029/2020MS002217

[4] Hourdin et al., 2021, https://doi.org/10.1029/2020MS002225

[5] Cheruy et al., 2020, https://doi.org/10.1029/2019MS002005

How to cite: Coulon--Decorzens, M., Cheruy, F., and Hourdin, F.: Semi-automatic tuning procedure for a GCM targeting continental surfaces: a first experiment using in situ observations, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4493, https://doi.org/10.5194/egusphere-egu22-4493, 2022.

EGU22-4923 | Presentations | ITS2.6/AS5.1

Constrained Generative Adversarial Networks for Improving Earth System Model Precipitation 

Philipp Hess, Markus Drüke, Stefan Petri, Felix Strnad, and Niklas Boers

The simulation of precipitation in numerical Earth system models (ESMs) involves various processes on a wide range of scales, requiring high temporal and spatial resolution for realistic simulations. This can lead to biases in computationally efficient ESMs that have a coarse resolution and limited model complexity. Traditionally, these biases are corrected by relating the distributions of historical simulations with observations [1]. While these methods successfully improve the modelled statistics, unrealistic spatial features that require a larger spatial context are not addressed.

Here we apply generative adversarial networks (GANs) [2] to transform precipitation of the CM2Mc-LPJmL ESM [3] into a bias-corrected and more realistic output. Feature attribution shows that the GAN has correctly learned to identify spatial regions with the largest bias during training. Our method presents a general bias correction framework that can be extended to a wider range of ESM variables to create highly realistic but computationally inexpensive simulations of future climates. We also discuss the generalizability of our approach to projections from CMIP6, given that the GAN is only trained on historical data.

[1] A.J. Cannon et al. "Bias correction of GCM precipitation by quantile mapping: How well do methods preserve changes in quantiles and extremes?." Journal of Climate 28.17 (2015): 6938-6959.

[2] I. Goodfellow et al. "Generative adversarial nets." Advances in neural information processing systems 27 (2014).

[3] M. Drüke et al. "CM2Mc-LPJmL v1.0: Biophysical coupling of a process-based dynamic vegetation model with managed land to a general circulation model." Geoscientific Model Development 14.6 (2021): 4117-4141.
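The traditional distribution-based correction of [1] can be sketched with empirical quantile mapping (Python/NumPy; the gamma-distributed "precipitation" data are synthetic stand-ins for model output and observations):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_new):
    """Empirical quantile mapping: replace each model value by the observed
    value at the same quantile of the historical distributions."""
    q = np.interp(model_new,
                  np.sort(model_hist),
                  np.linspace(0, 1, len(model_hist)))   # model value -> quantile
    return np.quantile(obs_hist, q)                     # quantile -> observed value

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 2.0, size=5000)                  # "observed" precipitation
model = 0.5 * rng.gamma(2.0, 2.0, size=5000) + 1.0    # biased "ESM" precipitation

corrected = quantile_map(model, obs, model)
print(np.mean(model) - np.mean(obs), np.mean(corrected) - np.mean(obs))
```

As the abstract notes, such a pointwise mapping matches the marginal statistics but, unlike the GAN approach, cannot repair unrealistic spatial patterns that require a larger spatial context.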

How to cite: Hess, P., Drüke, M., Petri, S., Strnad, F., and Boers, N.: Constrained Generative Adversarial Networks for Improving Earth System Model Precipitation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4923, https://doi.org/10.5194/egusphere-egu22-4923, 2022.

EGU22-5219 | Presentations | ITS2.6/AS5.1 | Highlight

Neural Partial Differential Equations for Atmospheric Dynamics 

Maximilian Gelbrecht and Niklas Boers

When predicting complex systems such as parts of the Earth system, one typically relies on differential equations which can often be incomplete, missing unknown influences or higher-order effects. Using the universal differential equations framework, we can augment the equations with artificial neural networks that compensate for these deficiencies. We show that this can be used to predict the dynamics of high-dimensional, spatiotemporally chaotic partial differential equations, such as the ones describing atmospheric dynamics. As a first step towards a hybrid atmospheric model, we investigate the Marshall-Molteni quasigeostrophic model in the form of a Neural Partial Differential Equation. We use it in synthetic examples where parts of the governing equations are replaced with artificial neural networks (ANNs) and demonstrate how the ANNs can recover those terms.
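The universal-differential-equations idea, learning the missing part of a differential equation from data, can be sketched in one dimension (Python/NumPy; a polynomial fit stands in for the artificial neural network, and the toy dynamics are invented for illustration):

```python
import numpy as np

# True dynamics: dx/dt = -x + 0.5*sin(2x); the "known physics" is only -x.
def f_true(x):
    return -x + 0.5 * np.sin(2 * x)

dt = 0.01
x = np.empty(3000)
x[0] = 2.0
for i in range(len(x) - 1):              # Euler integration of the true system
    x[i + 1] = x[i] + dt * f_true(x[i])

dxdt = np.gradient(x, dt)                # derivative estimate from the trajectory
residual = dxdt + x                      # what the known physics (-x) misses

# Fit the missing term as a degree-5 polynomial of the state
# (a stand-in for the neural network in the universal-ODE framework).
coeffs = np.polyfit(x, residual, deg=5)
learned = np.polyval(coeffs, x)

err = float(np.max(np.abs(learned - 0.5 * np.sin(2 * x))))
print(err)   # the learned term approximates the missing 0.5*sin(2x)
```

In the actual framework the fitted term is an ANN embedded inside the equations and trained through the solver, rather than a post-hoc regression on estimated derivatives.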

How to cite: Gelbrecht, M. and Boers, N.: Neural Partial Differential Equations for Atmospheric Dynamics, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5219, https://doi.org/10.5194/egusphere-egu22-5219, 2022.

EGU22-5631 | Presentations | ITS2.6/AS5.1

Autonomous Assessment of Source Area Distributions for Sections in Lagrangian Particle Release Experiments 

Carola Trahms, Patricia Handmann, Willi Rath, Matthias Renz, and Martin Visbeck

Lagrangian experiments for particle tracing in atmosphere or ocean models, and their analysis, are a cornerstone of Earth-system studies. They cover diverse study objectives such as the identification of pathways or source regions. Data for Lagrangian studies are generated by releasing virtual particles at one or multiple locations of interest and simulating their advective-diffusive behavior backwards or forwards in time. Identifying the main pathways connecting two regions of interest is often done by counting the trajectories that reach both regions. Here, the exact source and target regions must be defined manually by a researcher. Manually defining the importance and exact location of these regions introduces a highly subjective perspective into the analysis. Additionally, to investigate all major target regions, each of them must be defined manually and the data analyzed accordingly. This human element slows down and complicates large-scale analyses with many different sections and possible source areas.

We propose to significantly reduce the manual aspect by automating this process. To this end, we combine methods from different areas of machine learning and pattern mining into a sequence of steps. First, unsupervised methods, i.e., clustering, identify possible source areas on a randomized subset of the data. In a second step, supervised learning, i.e., classification, labels the positions along the trajectories according to their most probable source area, using the previously identified clusters as labels. The results of this approach can then be compared quantitatively to the results of analyses with manual definition of source areas and border-hitting-based labeling of the trajectories. Preliminary findings suggest that this approach could indeed help to objectify and accelerate the analysis process for Lagrangian particle release experiments.
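The two-step cluster-then-classify sequence can be sketched as follows (Python/NumPy; a minimal k-means and a nearest-centroid classifier stand in for the clustering and classification methods, and the two-dimensional "source areas" are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)

def kmeans(points, k, iters=50):
    """Minimal k-means: the unsupervised step that proposes source areas."""
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centroids, axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids

def classify(centroids, points):
    """Supervised-style step: label positions by their most probable source area."""
    return np.argmin(np.linalg.norm(points[:, None] - centroids, axis=2), axis=1)

# Toy "particle positions" drawn from two distinct source areas.
src_a = rng.normal([0.0, 0.0], 0.2, size=(150, 2))
src_b = rng.normal([3.0, 3.0], 0.2, size=(150, 2))
subset = np.vstack([src_a[:50], src_b[:50]])     # randomized subset for clustering

centroids = kmeans(subset, k=2)
labels = classify(centroids, np.vstack([src_a, src_b]))
print(len(set(labels[:150])), len(set(labels[150:])))   # one label per source area
```

Here the clusters found on the subset become the labels used to tag all remaining trajectory positions, removing the need to draw source-area boundaries by hand.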

How to cite: Trahms, C., Handmann, P., Rath, W., Renz, M., and Visbeck, M.: Autonomous Assessment of Source Area Distributions for Sections in Lagrangian Particle Release Experiments, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5631, https://doi.org/10.5194/egusphere-egu22-5631, 2022.

EGU22-5632 | Presentations | ITS2.6/AS5.1

Data-Driven Sentinel-2 Based Deep Feature Extraction to Improve Insect Species Distribution Models 

Joe Phillips, Ce Zhang, Bryan Williams, and Susan Jarvis

Despite being a vital part of ecosystems, insects are dying out at unprecedented rates across the globe. To help address this in the UK, UK Centre for Ecology & Hydrology (UKCEH) are creating a tool to utilise insect species distribution models (SDMs) for better facilitating future conservation efforts via volunteer-led insect tracking procedures. Based on these SDM models, we explored the inclusion of additional covariate information via 10-20m2 bands of temporally-aggregated Sentinel-2 data taken over the North of England in 2017 to improve the predictive performance. Here, we matched the 10-20m2 resolution of the satellite data to the coarse 1002 insect observation data via four methodologies of increasing complexity. First, we considered standard pixel-based approaches, performing aggregation by taking both the mean and standard deviation over the 10m2 pixels. Second, we explored object-based approaches to address the modifiable areal unit problem by applying the SNIC superpixels algorithm over the extent, with the mean and standard deviation of the pixels taken within each segment. The resulting dataset was then re-projected to a resolution of 100m2 by taking the modal values of the 10m2 pixels, which were provided with the aggregated values of their parent segment. Third, we took the UKCEH-created 2017 Land Cover Map (LCM) dataset and sampled 42,000, random 100m2 areas, evenly distributed about their modal land cover classes. We trained the U-Net Deep Learning model using the Sentinel-2 satellite images and LCM classes, by which data-driven features were extracted from the network over each 100m2 extent. Finally, as with the second approach, we used the superpixels segments instead as the units of analysis, sampling 21,000 segments, and taking the smallest bounding box around each of them. An attention-based U-Net was then adopted to mask each of the segments from their background and extract deep features. 
In a similar fashion to the second approach, we then re-projected the resulting dataset to a resolution of 100m2, taking the modal segment values accordingly. Using cross-validated AUCs over various species of moths and butterflies, we found that the object-based deep learning approach achieved the best accuracy when used with the SDMs. As such, we conclude that the novel approach of spatially aggregating satellite data via object-based deep feature extraction has the potential to benefit similar model-based aggregation needs and catalyse a step-change in ecological and environmental applications in the future.
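The first, pixel-based methodology amounts to block-wise pooling of the fine pixels inside each coarse cell. A minimal sketch (the function name and array sizes are illustrative, not the authors' code), assuming a 10x aggregation factor:

```python
import numpy as np

def aggregate_band(band, factor=10):
    """Aggregate a fine-resolution raster to a coarser grid by taking the
    mean and standard deviation over each factor x factor block of pixels."""
    h, w = band.shape
    assert h % factor == 0 and w % factor == 0, "grid must tile evenly"
    # Reshape so each coarse cell's pixels sit in their own block axes.
    blocks = band.reshape(h // factor, factor, w // factor, factor)
    mean = blocks.mean(axis=(1, 3))
    std = blocks.std(axis=(1, 3))
    return mean, std

# Example: a 100x100 array of 10m pixels -> a 10x10 grid of 100m cells.
fine = np.arange(100 * 100, dtype=float).reshape(100, 100)
mean, std = aggregate_band(fine)
```

The standard deviation channel is what distinguishes this from simple downsampling: it retains some within-cell heterogeneity that the mean alone discards.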

How to cite: Phillips, J., Zhang, C., Williams, B., and Jarvis, S.: Data-Driven Sentinel-2 Based Deep Feature Extraction to Improve Insect Species Distribution Models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5632, https://doi.org/10.5194/egusphere-egu22-5632, 2022.

EGU22-5681 | Presentations | ITS2.6/AS5.1

AtmoDist as a new pathway towards quantifying and understanding atmospheric predictability 

Sebastian Hoffmann, Yi Deng, and Christian Lessig

The predictability of the atmosphere is a classical problem that has received much attention from both a theoretical and practical point of view. In this work, we propose to use a purely data-driven method based on a neural network to revisit the problem. The analysis is built upon the recently introduced AtmoDist network, which has been trained on high-resolution reanalysis data to provide a probabilistic estimate of the temporal difference between given atmospheric fields, represented by vorticity and divergence. We define the skill of the network for this task as a new measure of atmospheric predictability, hypothesizing that the prediction of the temporal differences by the network will be more susceptible to errors when the atmospheric state is intrinsically less predictable. Preliminary results show that for short timescales (3–48 hours) one sees enhanced predictability in the warm season compared to the cool season over the northern midlatitudes, and lower predictability over ocean compared to land. These findings support the hypothesis that across short timescales, AtmoDist relies on the recurrence of mesoscale convection with coherent spatiotemporal structures to connect spatial evolutions to temporal differences. For example, the prevalence of mesoscale convective systems (MCSs) over the central US in the boreal warm season can explain the increase of mesoscale predictability there, and oceanic zones marked by greater predictability correspond well to regions of elevated convective activity such as the Pacific ITCZ. Given the dependence of atmospheric predictability on geographic location, season, and most importantly, timescale, we further apply the method to synoptic scales (2–10 days), where the excitation and propagation of large-scale disturbances such as Rossby wave packets are expected to provide the connection between temporal and spatial differences. 
The design of the AtmoDist network is thereby adapted to the prediction range, for example, the size of the local patches that serve as input to AtmoDist is chosen based on the spatiotemporal atmospheric scales that provide the expected time and space connections.

By providing to the community a powerful, purely data-driven technique for quantifying, evaluating, and interpreting predictability, our work lays the foundation for efficiently detecting the existence of sub-seasonal to seasonal (S2S) predictability and, by further analyzing the mechanism of AtmoDist, understanding the physical origins, which bears major scientific and socioeconomic significances.
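The core idea of turning lag-classification skill into a predictability measure can be sketched as follows (a toy illustration, not the AtmoDist implementation; `lag_probs` stands in for the network's softmax output over candidate time lags):

```python
import numpy as np

def lag_cross_entropy(lag_probs, true_lags):
    """Mean negative log-likelihood the network assigns to the true time
    lag between two fields; higher values = the lag is harder to recover,
    i.e. the atmospheric state is read as less predictable."""
    n = len(true_lags)
    picked = lag_probs[np.arange(n), true_lags]
    return float(-np.log(picked + 1e-12).mean())

# Two toy regimes over 8 candidate lags: a confident network (predictable
# states) versus a diffuse one (unpredictable states).
confident = np.full((4, 8), 0.02); confident[:, 3] = 0.86
diffuse = np.full((4, 8), 0.125)
truth = np.full(4, 3)
```

Averaging this score per region or season would then yield maps like the warm-season/cool-season contrasts described above.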

How to cite: Hoffmann, S., Deng, Y., and Lessig, C.: AtmoDist as a new pathway towards quantifying and understanding atmospheric predictability, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5681, https://doi.org/10.5194/egusphere-egu22-5681, 2022.

EGU22-5746 | Presentations | ITS2.6/AS5.1

Model Output Statistics (MOS) and Machine Learning applied to CAMS O3 forecasts: trade-offs between continuous and categorical skill scores 

Hervé Petetin, Dene Bowdalo, Pierre-Antoine Bretonnière, Marc Guevara, Oriol Jorba, Jan Mateu armengol, Margarida Samso Cabre, Kim Serradell, Albert Soret, and Carlos Pérez García-Pando

Air quality (AQ) forecasting systems are usually built upon physics-based numerical models that are affected by a number of uncertainty sources. In order to reduce forecast errors, first and foremost the bias, they are often coupled with Model Output Statistics (MOS) modules. MOS methods are statistical techniques used to correct raw forecasts at surface monitoring station locations, where AQ observations are available. In this study, we investigate to what extent AQ forecasts can be improved using a variety of MOS methods, including persistence (PERS), moving average (MA), quantile mapping (QM), Kalman Filter (KF), analogs (AN), and gradient boosting machine (GBM). We apply our analysis to the Copernicus Atmospheric Monitoring Service (CAMS) regional ensemble median O3 forecasts over the Iberian Peninsula during 2018–2019. A key aspect of our study is the evaluation, which is performed using a very comprehensive set of continuous and categorical metrics at various time scales (hourly to daily), along different lead times (1 to 4 days), and using different meteorological input data (forecast vs reanalyzed).

Our results show that O3 forecasts can be substantially improved using such MOS corrections and that this improvement goes much beyond the correction of the systematic bias. Although the improvement typically holds at all lead times, some MOS methods are more adversely impacted by increasing lead time than others. When considering MOS methods relying on meteorological information and comparing the results obtained with IFS forecasts and ERA5 reanalysis, the relative deterioration brought by the use of IFS is minor, which paves the way for their use in operational MOS applications. Importantly, our results also clearly show the trade-offs between continuous and categorical skills and their dependence on the MOS method. The most sophisticated MOS methods better reproduce O3 mixing ratios overall, with the lowest errors and highest correlations. However, they are not necessarily the best at predicting the highest O3 episodes, for which simpler MOS methods can give better results. Although the complex impact of MOS methods on the distribution and variability of raw forecasts can only be comprehended through an extended set of complementary statistical metrics, our study shows that optimally implementing MOS in AQ forecast systems crucially requires selecting the appropriate skill score to be optimized for the forecast application of interest.
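Of the MOS methods compared, empirical quantile mapping is easy to sketch (a toy illustration with synthetic data; the function and variable names are ours, not the study's code):

```python
import numpy as np

def quantile_map(raw_forecast, past_forecasts, past_obs):
    """Map each raw forecast value to the observed value at the same
    empirical quantile, using a training record of forecast/obs pairs."""
    fc_sorted = np.sort(past_forecasts)
    ob_sorted = np.sort(past_obs)
    # Quantile of the new forecast within the historical forecast CDF...
    q = np.searchsorted(fc_sorted, raw_forecast) / len(fc_sorted)
    # ...read off the observation distribution at that quantile.
    return np.quantile(ob_sorted, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(0)
obs = rng.gamma(4.0, 10.0, 5000)   # synthetic "true" O3 mixing ratios
fc = obs * 0.8 + 5.0               # biased raw model forecasts
corrected = quantile_map(fc[:100], fc, obs)
```

Note that quantile mapping corrects the forecast *distribution*, which is exactly why its continuous and categorical skills can diverge from those of pairwise-error-minimizing methods.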

Petetin, H., Bowdalo, D., Bretonnière, P.-A., Guevara, M., Jorba, O., Armengol, J. M., Samso Cabre, M., Serradell, K., Soret, A., and Pérez Garcia-Pando, C.: Model Output Statistics (MOS) applied to CAMS O3 forecasts: trade-offs between continuous and categorical skill scores, Atmos. Chem. Phys. Discuss. [preprint], https://doi.org/10.5194/acp-2021-864, in review, 2021.

How to cite: Petetin, H., Bowdalo, D., Bretonnière, P.-A., Guevara, M., Jorba, O., Mateu armengol, J., Samso Cabre, M., Serradell, K., Soret, A., and Pérez García-Pando, C.: Model Output Statistics (MOS) and Machine Learning applied to CAMS O3 forecasts: trade-offs between continuous and categorical skill scores, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5746, https://doi.org/10.5194/egusphere-egu22-5746, 2022.

EGU22-5766 | Presentations | ITS2.6/AS5.1

Sampling strategies for data-driven parameterization of gravity wave momentum transport

L. Minah Yang and Edwin P. Gerber

With the goal of developing a data-driven parameterization of unresolved gravity wave (GW) momentum transport for use in general circulation models (GCMs), we investigate neural network architectures that emulate the Alexander-Dunkerton 1999 (AD99) scheme, an existing physics-based GW parameterization. We analyze the distribution of errors as functions of shear-related metrics in an effort to diagnose the disparity between online and offline performance of the trained emulators, and develop a sampling algorithm to treat biases on the tails of the distribution without adversely impacting mean performance. 

It has been shown in previous efforts [1] that stellar offline performance does not necessarily guarantee adequate online performance, or even stability. Error analysis reveals that the majority of the samples are learned quickly, while some stubborn samples remain poorly represented. We find that the more error-prone samples are those with wind profiles that have large shears; this is consistent with physical intuition, as gravity waves encounter a wider range of critical levels when experiencing large shear, and parameterizing gravity waves for these samples is therefore a more difficult, complex task. To remedy this, we develop a sampling strategy that performs a parameterized histogram equalization, a concept borrowed from 1D optimal transport. 

The sampling algorithm uses a linear mapping from the original histogram to a more uniform histogram parameterized by $t \in [0,1]$, where $t=0$ recovers the original distribution and $t=1$ enforces a completely uniform distribution. A given value $t$ assigns each bin a new probability, which we then use to sample from each bin. If the new probability is smaller than the original, we invoke sampling without replacement, limited to a reduced number consistent with the new probability. If the new probability is larger than the original, we repeat all the samples in the bin up to some predetermined maximum repeat value (a threshold to avoid extreme oversampling at the tails). We optimize this sampling algorithm with respect to $t$, the maximum repeat value, and the number and distribution (uniform or not) of the histogram bins. The ideal combination of these parameters yields errors that are closer to a constant function of the shear metrics while maintaining high accuracy over the whole dataset. Although we study the performance of this algorithm in the context of training a gravity wave parameterization emulator, this strategy can be used for learning from datasets with long-tailed distributions where the rare samples are associated with low accuracy. Such datasets are prevalent in Earth system dynamics: the launching of gravity waves and extreme events like hurricanes and heat waves are just a few examples. 
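The steps above can be sketched as follows (a simplified illustration under our own naming; the authors' actual binning and optimization details may differ):

```python
import numpy as np

def equalized_resample(values, n_bins=10, t=0.5, max_repeat=3, seed=0):
    """Resample a 1D dataset so its histogram moves toward uniform.
    t=0 keeps the original distribution, t=1 targets fully uniform;
    over-represented bins are subsampled without replacement, rare bins
    are repeated at most `max_repeat` times (capping tail oversampling)."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    bin_ids = np.clip(np.digitize(values, edges) - 1, 0, n_bins - 1)
    n = len(values)
    out = []
    for b in range(n_bins):
        members = values[bin_ids == b]
        if len(members) == 0:
            continue
        p_orig = len(members) / n
        p_new = (1 - t) * p_orig + t / n_bins   # linear map toward uniform
        target = int(round(p_new * n))
        if target <= len(members):              # subsample w/o replacement
            out.append(rng.choice(members, size=target, replace=False))
        else:                                   # repeat, capped
            reps = min(target, max_repeat * len(members))
            out.append(np.tile(members, max_repeat)[:reps])
    return np.concatenate(out)

skewed = np.random.default_rng(1).exponential(1.0, 2000)
balanced = equalized_resample(skewed, t=0.8)
```

With a strongly skewed input, the resampled set gives the tail bins substantially more weight while the bulk is thinned out.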

[1] Espinosa, Z. I., A. Sheshadri, G. R. Cain, E. P. Gerber, and K. J. DallaSanta, 2021: A Deep Learning Parameterization of Gravity Wave Drag Coupled to an Atmospheric Global Climate Model, Geophys. Res. Lett., in review. [https://edwinpgerber.github.io/files/espinosa_etal-GRL-revised.pdf]

How to cite: Yang, L. and Gerber, E.: Sampling strategies for data-driven parameterization of gravity wave momentum transport, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5766, https://doi.org/10.5194/egusphere-egu22-5766, 2022.

EGU22-5980 | Presentations | ITS2.6/AS5.1 | Highlight

Probabilistic forecasting of heat waves with deep learning 

George Miloshevich, Valerian Jacques-Dumas, Pierre Borgnat, Patrice Abry, and Freddy Bouchet

Extreme events such as storms, floods, cold spells and heat waves are expected to have an increasing societal impact with climate change. The study of such rare events is complicated, however, by the computational cost of highly complex models and the lack of observations. With the help of machine learning, synthetic models for forecasting can be constructed and cheaper resampling techniques can be developed. This may also help clarify the more regional impacts of climate change.

In this work, we perform a detailed analysis of how deep neural networks (DNNs) can be used in intermediate-range forecasting of prolonged heat waves lasting several weeks over synoptic spatial scales. In particular, we train a convolutional neural network (CNN) on 7200 years of a climate model simulation. We are interested in probabilistic prediction (the committor function of transition path theory). Thus we discuss appropriate forecasting scores such as the Brier skill score, which is popular in weather prediction, and the cross-entropy skill, which is based on information-theoretic considerations. They allow us to measure the success of various architectures and investigate more efficient pipelines to extract predictions from physical observables such as geopotential, temperature and soil moisture. A priori, the committor is hard to visualize, as it is a high-dimensional function of its inputs, the grid points of the climate model for a given field. Fortunately, we can construct composite maps conditioned on its values, which reveal that the CNN is likely relying on global teleconnection patterns of geopotential. The soil moisture signal, on the other hand, is more localized, with predictive capability at much longer lead times (at least a month); this relates to soil-atmosphere interactions. One expects the performance of DNNs to improve greatly with more data, and we provide a quantitative assessment of this effect. In addition, we offer more details on how the undersampling of negative events affects knowledge of the committor function. We show that transfer learning helps ensure that the committor is a smooth function along the trajectory. This will be an important quality when such a committor is applied in rare event algorithms for importance sampling. 
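The Brier skill score mentioned above compares a probabilistic forecast's Brier score to that of a reference forecast; a minimal sketch with synthetic rare events (all names and numbers illustrative):

```python
import numpy as np

def brier_skill_score(p_forecast, outcomes, p_reference):
    """Brier skill score: 1 = perfect, 0 = no better than the reference
    (typically climatology), negative = worse than the reference."""
    bs = np.mean((p_forecast - outcomes) ** 2)
    bs_ref = np.mean((p_reference - outcomes) ** 2)
    return 1.0 - bs / bs_ref

rng = np.random.default_rng(0)
truth = (rng.random(1000) < 0.05).astype(float)  # rare events, 5% base rate
clim = np.full(1000, 0.05)                       # climatological probability
sharp = np.where(truth == 1, 0.7, 0.02)          # a skilful committor estimate
bss = brier_skill_score(sharp, truth, clim)
```

For rare events the climatological reference already has a small Brier score, which is why skill scores (rather than raw scores) are the meaningful quantity here.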
 
While DNNs are universal function approximators, extrapolation can be problematic. To address this question, we train a CNN on a dataset generated from a simulation without a diurnal cycle, where the feedbacks between soil moisture and heat waves appear to be significantly stronger. Nevertheless, when the CNN with the given weights is validated on a dataset generated from a simulation with a diurnal cycle, the predictions generalize relatively well, despite a small reduction in skill. This generality validates the approach. 
 

How to cite: Miloshevich, G., Jacques-Dumas, V., Borgnat, P., Abry, P., and Bouchet, F.: Probabilistic forecasting of heat waves with deep learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5980, https://doi.org/10.5194/egusphere-egu22-5980, 2022.

EGU22-6479 | Presentations | ITS2.6/AS5.1

Parameter inference and uncertainty quantification for an intermediate complexity climate model 

Benedict Roeder, Jakob Schloer, and Bedartha Goswami

Well-adapted parameters in climate models are essential to make accurate predictions for future projections. In climate science, the record of precise and comprehensive observational data is rather short, and parameters of climate models are often hand-tuned or learned from artificially generated data. Due to limited and noisy data, one wants to use Bayesian models to have access to uncertainties of the inferred parameters. Most popular algorithms for learning parameters from observational data, like the Kalman inversion approach, only provide point estimates of parameters.

In this work, we compare two Bayesian parameter inference approaches applied to the intermediate complexity model for the El Niño-Southern Oscillation by Zebiak & Cane. i) The "Calibrate, Emulate, Sample" (CES) approach, an extension of the ensemble Kalman inversion, which allows posterior inference by emulating the model via Gaussian Processes and thereby enables efficient sampling. ii) The simulation-based inference (SBI) approach, where the approximate posterior distribution is learned from simulated model data and observational data using neural networks.

We evaluate the performance of both approaches by comparing their run times and the number of required model evaluations, assess the scalability with respect to the number of inference parameters, and examine their posterior distributions.

How to cite: Roeder, B., Schloer, J., and Goswami, B.: Parameter inference and uncertainty quantification for an intermediate complexity climate model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6479, https://doi.org/10.5194/egusphere-egu22-6479, 2022.

EGU22-6553 | Presentations | ITS2.6/AS5.1

Can simple machine learning methods predict concentrations of OH better than state of the art chemical mechanisms? 

Sebastian Hickman, Paul Griffiths, James Weber, and Alex Archibald

Concentrations of the hydroxyl radical, OH, control the lifetime of methane, carbon monoxide and other atmospheric constituents.  The short lifetime of OH, coupled with the spatial and temporal variability in its sources and sinks, makes accurate simulation of its concentration particularly challenging. To date, machine learning (ML) methods have been infrequently applied to global studies of atmospheric chemistry.

We present an assessment of the use of ML methods for the challenging case of simulation of the hydroxyl radical at the global scale, and show that several approaches are indeed viable.  We use observational data from the recent NASA Atmospheric Tomography Mission to show that machine learning methods are comparable in skill to state-of-the-art forward chemical models and are capable, if appropriately applied, of simulating OH to within observational uncertainty.  

We show that a simple ridge regression model is a better predictor of OH concentrations in the remote atmosphere than a state-of-the-art chemical mechanism implemented in a forward box model. Our work shows that machine learning may be an accurate emulator of chemical concentrations in atmospheric chemistry, which would allow a significant speed-up in climate model runtime due to the speed and efficiency of simple machine learning methods. Furthermore, we show that relatively few predictors are required to simulate OH concentrations, suggesting that the variability in OH can be quantitatively accounted for by a few observables, with the potential to simplify the numerical simulation of atmospheric levels of key species such as methane. 
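The ridge-regression idea can be sketched in closed form (a toy illustration on synthetic data; the predictor names in the comments are our assumptions, not the study's actual set):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

rng = np.random.default_rng(0)
# Toy standardized predictors (think: ozone, water vapour, NO2, UV flux).
X = rng.standard_normal((500, 4))
true_w = np.array([0.8, 0.5, -0.3, 1.1])
y = X @ true_w + 0.1 * rng.standard_normal(500)  # synthetic "OH" target
w = ridge_fit(X, y, alpha=1.0)
```

The appeal for emulation is exactly what the abstract argues: a fitted linear model evaluates in microseconds, versus the cost of integrating a full chemical mechanism.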

How to cite: Hickman, S., Griffiths, P., Weber, J., and Archibald, A.: Can simple machine learning methods predict concentrations of OH better than state of the art chemical mechanisms?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6553, https://doi.org/10.5194/egusphere-egu22-6553, 2022.

EGU22-6674 | Presentations | ITS2.6/AS5.1

The gravity wave parameterization calibration problem: A 1D QBO model testbed 

Ofer Shamir, L. Minah Yang, David S. Connelly, and Edwin P. Gerber

An essential step in implementing any new parameterization is calibration, where the parameterization is adjusted to work with an existing model and yield some desired improvement. In the context of gravity wave (GW) momentum transport, calibration is necessitated by two facts: (i) Some GWs are always at least partially resolved by the model, and hence a parameterization should only account for the missing waves. Worse, the parameterization may need to correct for the misrepresentation of under-resolved GWs, i.e., coarse vertical resolution can bias the GW breaking level, leading to erroneous momentum forcing. (ii) The parameterized waves depend on the resolved solution for both their sources and dissipation, making them susceptible to model biases. Even a "perfect" parameterization could then yield an undesirable result, e.g., an unrealistic Quasi-Biennial Oscillation (QBO). While model-specific calibration is required, one would like a general "recipe" suitable for most models. From a practical point of view, the adoption of a new parameterization will be hindered by a too-demanding calibration process. This issue is of particular concern in the context of data-driven methods, where the number of tunable degrees of freedom is large (possibly in the millions). Thus, more judicious ways of addressing the calibration step are required. 

To address the above issues, we develop a 1D QBO model, where the "true" gravity wave momentum deposition is determined from a source distribution and critical level breaking, akin to a traditional physics-based GW parameterization. The control parameters associated with the source consist of the total wave flux (related to the total precipitation for convectively generated waves) and the spectrum width (related to the depth of convection). These parameters can be varied to mimic the variability in GW sources between different models, i.e., biases in precipitation variability. In addition, the model's explicit diffusivity and vertical advection can be varied to mimic biases in model numerics and circulation, respectively. The model thus allows us to assess the ability of a data-driven parameterization to (i) extrapolate, capturing the response of GW momentum transport to a change in the model parameters, and (ii) be calibrated, i.e., adjusted to maintain the desired simulation of the QBO in response to a change in the model parameters. The first property is essential for a parameterization to be used for climate prediction; the second, for a parameterization to be used at all. We focus in particular on emulators of GW momentum transport based on neural networks and regression trees, contrasting their ability to satisfy both of these goals.  
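The "source distribution plus critical-level breaking" ingredient can be caricatured in a few lines (a deliberately crude sketch, not the authors' 1D QBO model: each monochromatic wave deposits all of its flux at the first level where the background wind matches its phase speed):

```python
import numpy as np

def deposit_momentum(u, phase_speeds, fluxes):
    """Per-level momentum forcing from critical-level breaking: each wave
    (phase speed c, flux F) breaks where u(z) first crosses c."""
    forcing = np.zeros_like(u)
    for c, F in zip(phase_speeds, fluxes):
        s = np.sign(u - c)
        crossings = np.where(s[:-1] != s[1:])[0]
        if len(crossings):                 # deposit at the first crossing
            forcing[crossings[0] + 1] += F
    return forcing

z = np.linspace(0, 30, 31)                 # height levels (km, illustrative)
u = 10 * np.sin(2 * np.pi * z / 30)        # idealized QBO-like wind profile
cs = np.array([-8.0, -3.0, 3.0, 8.0])      # wave phase speeds (m/s)
Fs = np.array([-1.0, -1.0, 1.0, 1.0])      # westward/eastward momentum fluxes
forcing = deposit_momentum(u, cs, Fs)
```

In the real testbed the interesting question is how such forcing, fed back into the wind tendency, sustains a QBO-like oscillation, and whether an emulator trained on one source spectrum still does so when the spectrum is perturbed.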

 

How to cite: Shamir, O., Yang, L. M., Connelly, D. S., and Gerber, E. P.: The gravity wave parameterization calibration problem: A 1D QBO model testbed, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6674, https://doi.org/10.5194/egusphere-egu22-6674, 2022.

EGU22-6859 | Presentations | ITS2.6/AS5.1

Towards physics-informed stochastic parametrizations of subgrid physics in ocean models

Dmitri Kondrashov

All oceanic general circulation models (GCMs) include parametrizations of the unresolved subgrid-scale (eddy) effects on the large-scale motions, even at so-called eddy-permitting resolutions. Among the many problems associated with the development of accurate and efficient eddy parametrizations, one is the reliable decomposition of a turbulent flow into resolved and unresolved (subgrid) scale components. Finding an objective way to separate eddies is a fundamental, critically important and unresolved problem. 
Here, a statistically consistent correlation-based flow decomposition method (CBD), which employs a Gaussian filtering kernel with geographically varying topology – consistent with the observed local spatial correlations – achieves the desired scale separation. CBD is demonstrated for an eddy-resolving solution of the classical midlatitude double-gyre quasigeostrophic (QG) circulation, which possesses two asymmetric gyres of opposite circulation and a strong meandering eastward jet, such as the Gulf Stream in the North Atlantic and the Kuroshio in the North Pacific. CBD facilitates a comprehensive analysis of the feedbacks of eddies on the large-scale flow via the transient part of the eddy forcing. A `product integral' based on the time-lagged correlation between the diagnosed eddy forcing and the evolving large-scale flow uncovers a robust `eddy backscatter' mechanism. Data-driven augmentation of a non-eddy-resolving ocean model with stochastically emulated eddy fields allows the missing eddy-driven features to be restored, such as the merging western boundary currents, their eastward extension and the low-frequency variability of the gyres.
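The basic filtering step behind any such scale separation can be sketched as follows (a minimal constant-bandwidth Gaussian filter in pure numpy; the CBD method's geographically varying kernel is more sophisticated than this):

```python
import numpy as np

def gaussian_lowpass(field, sigma):
    """Separable Gaussian filter (periodic boundaries) that splits a 2D
    flow field into large-scale (filtered) and eddy (residual) parts."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    out = field
    for axis in (0, 1):        # the 2D Gaussian kernel is separable
        idx = np.arange(-radius, out.shape[axis] + radius)
        padded = np.take(out, idx, axis=axis, mode='wrap')
        out = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode='valid'), axis, padded)
    return out

rng = np.random.default_rng(0)
psi = rng.standard_normal((64, 64))        # toy streamfunction snapshot
large_scale = gaussian_lowpass(psi, sigma=4.0)
eddy = psi - large_scale                   # diagnosed subgrid/eddy component
```

Diagnosing the eddy forcing then amounts to applying the model operator to the full and filtered fields and differencing, which is what the time-lagged correlation analysis above operates on.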

  • Agarwal, N., Ryzhov, E. A., Kondrashov, D., and Berloff, P. S., 2021: Correlation-based flow decomposition and statistical analysis of the eddy forcing, Journal of Fluid Mechanics, 924, A5, doi:10.1017/jfm.2021.604

  • Agarwal, N., Kondrashov, D., Dueben, P., Ryzhov, E. A., and Berloff, P. S., 2021: A comparison of data-driven approaches to build low-dimensional ocean models, Journal of Advances in Modeling Earth Systems, doi:10.1029/2021MS002537

 

How to cite: Kondrashov, D.: Towards physics-informed stochastic parametrizations of subgrid physics in ocean models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6859, https://doi.org/10.5194/egusphere-egu22-6859, 2022.

EGU22-7044 | Presentations | ITS2.6/AS5.1

Seismic Event Characterization using Manifold Learning Methods 

Yuri Bregman, Yochai Ben Horin, Yael Radzyner, Itay Niv, Maayan Kahlon, and Neta Rabin

Manifold learning is a branch of machine learning that focuses on compactly representing complex data-sets based on their fundamental intrinsic parameters. One such method is diffusion maps, which reduces the dimension of the data while preserving its geometric structure. In this work, diffusion maps are applied to several seismic event characterization tasks. The first task is automatic earthquake-explosion discrimination, which is an essential component of nuclear test monitoring. We also use this technique to automatically identify mine explosions and aftershocks following large earthquakes. Identification of such events helps to lighten the analysts' burden and allows timely production of reviewed seismic bulletins.

The proposed methods begin with a pre-processing stage in which a time–frequency representation is extracted from each seismogram, capturing common properties of seismic events while overcoming magnitude differences. Then, diffusion maps are used to construct a low-dimensional model of the original data. In this new low-dimensional space, classification analysis is carried out.
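The diffusion-maps construction can be sketched as follows (a minimal numpy version; the kernel bandwidth `eps` and the toy two-class data are our assumptions, not the seismic features used in the study):

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """Minimal diffusion maps: Gaussian affinities, row-normalised Markov
    matrix, leading non-trivial eigenvectors as the embedding."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)   # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    keep = order[1:1 + n_coords]           # skip the trivial constant mode
    return vecs.real[:, keep] * vals.real[keep]

rng = np.random.default_rng(0)
# Two toy event classes (think: explosions vs earthquakes) in feature space.
A = rng.standard_normal((30, 5)) + 4.0
B = rng.standard_normal((30, 5)) - 4.0
coords = diffusion_map(np.vstack([A, B]), eps=300.0)
```

In a setting like the one above, the first diffusion coordinate already separates the two event classes, which is the property the subsequent classification stage exploits.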

The algorithm’s discrimination performance is demonstrated on several seismic data sets. For instance, using the seismograms from EIL station, we identify arrivals that were caused by explosions at the nearby Eshidiya mine in Jordan. The model provides a visualization of the data, organized by its intrinsic factors. Thus, along with the discrimination results, we provide a compact organization of the data that characterizes the activity patterns in the mine.

Our results demonstrate the potential and strength of the manifold learning based approach, which may be suitable in other geophysics domains as well.

How to cite: Bregman, Y., Ben Horin, Y., Radzyner, Y., Niv, I., Kahlon, M., and Rabin, N.: Seismic Event Characterization using Manifold Learning Methods, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7044, https://doi.org/10.5194/egusphere-egu22-7044, 2022.

EGU22-7093 | Presentations | ITS2.6/AS5.1

Reservoir inflow forecast by combining meteorological ensemble forecast, physical hydrological simulation and machine learning

J. Liu and X. Yuan

Accurate streamflow forecasts can provide guidance for reservoir management, which can regulate river flows, manage water resources and mitigate flood damage. One popular way to forecast streamflow is to use bias-corrected meteorological forecasts to drive a calibrated hydrological model. For cascade reservoirs, however, such approaches suffer significant deficiencies because of the difficulty of simulating reservoir operations with a physical approach and the uncertainty of meteorological forecasts over small catchments. Another popular way is to forecast streamflow with machine learning methods, which can fit a statistical model without inputs such as reservoir operating rules. We therefore integrate meteorological forecasts, a land surface hydrological model and machine learning to forecast hourly streamflow over the Yantan catchment, one of the cascade reservoirs on the Hongshui River, where streamflow is influenced both by upstream reservoir water releases and by the rainfall-runoff process within the catchment.

Before evaluating the streamflow forecast system, it is necessary to investigate its skill by means of a series of specific hindcasts that isolate potential sources of predictability, such as the meteorological forcing and the initial condition (IC). Here, we use the ensemble streamflow prediction (ESP)/reverse ESP (revESP) method to explore the impact of the IC on hourly streamflow prediction. Results show that the influence of the IC on runoff prediction lasts about 16 hours. Next, we evaluate the hourly streamflow hindcasts performed by the forecast system during the rainy seasons of 2013-2017. We use European Centre for Medium-Range Weather Forecasts perturbed forecast forcing from the THORPEX Interactive Grand Global Ensemble (TIGGE-ECMWF) as meteorological input to perform the hourly streamflow hindcasts. Compared with the ESP, the hydrometeorological ensemble forecast approach reduces probabilistic and deterministic forecast errors by 6% during the first 7 days. After integrating the long short-term memory (LSTM) deep learning method into the system, the deterministic forecast error is further reduced by 6% in the first 72 hours. We also use historically observed streamflow to drive another LSTM model to perform an LSTM-only streamflow forecast. Results show that its skill drops sharply after the first 24 hours, which indicates that the meteorology-hydrology modeling approach can improve the streamflow forecast.

How to cite: Liu, J. and Yuan, X.: Reservoir inflow forecast by combining meteorological ensemble forecast, physical hydrological simulation and machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7093, https://doi.org/10.5194/egusphere-egu22-7093, 2022.

EGU22-7113 | Presentations | ITS2.6/AS5.1 | Highlight

Coupling regional air quality simulations of EURAD-IM with street canyon observations - a machine learning approach 

Charlotte Neubacher, Philipp Franke, Alexander Heinlein, Axel Klawonn, Astrid Kiendler-Scharr, and Anne-Caroline Lange

State-of-the-art atmospheric chemistry transport models on regional scales, such as EURAD-IM (EURopean Air pollution Dispersion-Inverse Model), simulate physical and chemical processes in the atmosphere to predict the dispersion of air pollutants. With EURAD-IM's 4D-var data assimilation application, detailed analyses of air quality can be conducted. These analyses allow for improvements of atmospheric chemistry forecasts as well as emission source strength assessments. EURAD-IM simulations can be nested to a spatial resolution of 1 km, which still does not resolve the urban scale. Thus, inner-city street canyon observations cannot be exploited, since anthropogenic pollution there varies vastly over scales of 100 m or less.

We address this issue by implementing a machine learning (ML) module into EURAD-IM, forming a hybrid model that enables bridging the representativeness gap between model resolution and inner-city observations. The data assimilation of EURAD-IM is thus strengthened by additional observations in urban regions. Our ML module is based on a neural network (NN) whose input features are relevant environmental information on street architecture, traffic density, meteorology, and atmospheric pollutant concentrations from EURAD-IM, as well as the street canyon observations of pollutants. The NN then maps the observed concentrations from the street canyon scale to larger spatial scales.

We are currently working with a fully controllable test environment created from EURAD-IM forecasts of the years 2020 and 2021 at different spatial resolutions. Here, the ML model maps the high-resolution hourly NO2 concentration to the concentration of the low-resolution model grid. It turns out to be very difficult for a single NN to learn the hourly concentrations with equal accuracy from diurnal cycles of pollutant concentrations. Thus, we developed a model that uses an independent NN for each hour of the day to support time-of-day learning. This reduces the training error by a factor of 10². As a proof of concept, we trained the ML model in an overfitting regime, in which the mean squared training error reduces to 0.001% for each hour. Furthermore, by optimizing the hyperparameters and introducing regularization terms to reduce the overfitting, we achieved a validation error of 9−12% at night and 9−16% during the day.
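The one-independent-model-per-hour idea can be sketched with linear regressors standing in for the NNs (all names and data synthetic; the actual module uses neural networks on EURAD-IM fields):

```python
import numpy as np

class PerHourRegressors:
    """One independent linear model per hour of day (NN stand-in), so each
    model only has to learn its own hour's fine-to-coarse mapping."""

    def fit(self, hours, X, y):
        self.w = {}
        for h in range(24):
            m = hours == h
            Xh = np.c_[X[m], np.ones(m.sum())]   # add an intercept column
            self.w[h], *_ = np.linalg.lstsq(Xh, y[m], rcond=None)
        return self

    def predict(self, hours, X):
        Xb = np.c_[X, np.ones(len(X))]
        return np.array([Xb[i] @ self.w[h] for i, h in enumerate(hours)])

rng = np.random.default_rng(0)
hours = rng.integers(0, 24, 2400)
X = rng.standard_normal((2400, 3))
# Synthetic target whose offset and sensitivity both vary with hour of day.
y = np.sin(hours / 24 * 2 * np.pi) + X[:, 0] * (1 + hours / 24)
model = PerHourRegressors().fit(hours, X, y)
pred = model.predict(hours, X)
```

Because each sub-model sees only one hour, the diurnal cycle never has to be represented inside a single set of weights, which is the effect the abstract describes.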

How to cite: Neubacher, C., Franke, P., Heinlein, A., Klawonn, A., Kiendler-Scharr, A., and Lange, A.-C.: Coupling regional air quality simulations of EURAD-IM with street canyon observations - a machine learning approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7113, https://doi.org/10.5194/egusphere-egu22-7113, 2022.

EGU22-7135 | Presentations | ITS2.6/AS5.1 | Highlight

How to calibrate a climate model with neural network based physics? 

Blanka Balogh, David Saint-Martin, and Aurélien Ribes

Unlike the traditional subgrid-scale parameterizations used in climate models, current neural network (NN) parameterizations are only tuned offline, by minimizing a loss function on outputs from high-resolution models. This approach often leads to numerical instabilities and long-term biases. Here, we propose a method to design tunable NN parameterizations and calibrate them online. The calibration of the NN parameterization is achieved in two steps. First, some model parameters are included within the NN model input. This NN model is fitted at once for a range of values of the parameters, using an offline metric. Second, once the NN parameterization has been plugged into the climate model, the parameters included among the NN inputs are optimized with respect to an online metric quantifying errors on long-term statistics. We illustrate our method with two simple dynamical systems. Our approach significantly reduces long-term biases of the climate model with NN-based physics.
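A toy version of the two-step procedure can be sketched as follows. The assumptions here: a linear-in-features surrogate stands in for the NN, the tunable parameter theta is simply appended to the inputs (step 1), and the "online metric" is the error on a long-term mean statistic (step 2):

```python
import numpy as np

def offline_fit(rng, n=500):
    """Step 1: fit the surrogate over a *range* of parameter values
    by appending theta to the model inputs."""
    x = rng.uniform(-1.0, 1.0, n)
    theta = rng.uniform(0.0, 2.0, n)
    y = theta * x**2                      # 'true' subgrid tendency to learn
    F = np.column_stack([theta * x**2, x, theta, np.ones(n)])
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    return w

def surrogate(w, x, theta):
    """Surrogate parameterization: theta is part of the input."""
    return w @ np.array([theta * x**2, x, theta, 1.0])

def online_calibrate(w, target_stat):
    """Step 2: with the surrogate plugged in, tune theta so that a
    long-term statistic (here the mean tendency over a set of states)
    matches a target value."""
    xs = np.linspace(-1.0, 1.0, 201)
    candidates = np.linspace(0.0, 2.0, 201)
    errs = [abs(np.mean([surrogate(w, x, t) for x in xs]) - target_stat)
            for t in candidates]
    return candidates[int(np.argmin(errs))]
```

Because theta was part of the training inputs, no retraining is needed during the online calibration: only the scalar parameter is adjusted.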

How to cite: Balogh, B., Saint-Martin, D., and Ribes, A.: How to calibrate a climate model with neural network based physics?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7135, https://doi.org/10.5194/egusphere-egu22-7135, 2022.

EGU22-8279 | Presentations | ITS2.6/AS5.1

Using deep learning to improve the spatial resolution of the ocean model 

Ihor Hromov, Georgy Shapiro, Jose Ondina, Sanjay Sharma, and Diego Bruciaferri

For ocean models, increasing the spatial resolution is a matter of significant importance and thorough research. Computational resources limit how far the model resolution can be increased. This constraint is especially severe for traditional dynamical models, for which a twofold increase in horizontal resolution increases simulation times approximately tenfold. One potential way to relax this limitation is to use Artificial Intelligence methods such as Neural Networks (NN). In this research, NNs are applied to ocean circulation modelling. More specifically, a NN is applied to the output of the dynamical model to increase the spatial resolution of that output. The main dataset used is Sea Surface Temperature data at 0.05- and 0.02-degree horizontal resolution for the Irish Sea.

Several NN architectures were applied to address the task: Generative Adversarial Networks (GAN), Convolutional Neural Networks (CNN) and Multi-level Wavelet CNNs, all of which are used in other fields for problems related to resolution enhancement. The work will contrast the methods and present a provisional assessment of the efficiency of each.

How to cite: Hromov, I., Shapiro, G., Ondina, J., Sharma, S., and Bruciaferri, D.: Using deep learning to improve the spatial resolution of the ocean model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8279, https://doi.org/10.5194/egusphere-egu22-8279, 2022.

EGU22-8334 | Presentations | ITS2.6/AS5.1

Information theory solution approach for air-pollution sensors' location-allocation problem 

Barak Fishbain, Ziv Mano, and Shai Kendler

Urbanization and industrialization processes are accompanied by adverse environmental effects, such as air pollution. The first step in reducing air pollution is the detection of its source(s), which is achievable through monitoring. When deploying a sensor array, one must balance the array's cost and performance. This optimization problem is known as the location-allocation problem. Here, a new solution approach, which draws its foundation from information theory, is presented. The core of the method is air-pollution levels computed by a dispersion model under various meteorological conditions. The sensors are then placed in the locations that information theory identifies as the most uncertain. The method is compared with two other heuristics typically applied to the location-allocation problem: in the first, sensors are deployed randomly; in the second, sensors are placed according to the maximal cumulative pollution levels (i.e., hot spots). Two simulated scenes were evaluated for the comparison, one containing point sources and buildings, and the other also containing line sources (i.e., roads). The entropy method yielded a superior sensor deployment compared to the other two approaches in terms of source apportionment and the reconstruction of the dense pollution field from the sensor-network measurements.
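A minimal sketch of the entropy-based placement follows, assuming the dispersion-model output is arranged as a matrix of concentrations over scenarios × candidate locations (the arrangement and the histogram discretization are assumptions for illustration):

```python
import numpy as np

def entropy_placement(C, k, bins=10):
    """Pick the k candidate locations whose concentration distribution
    across simulated meteorological scenarios is most uncertain.

    C : array of shape (n_scenarios, n_locations) with dispersion-model
        concentrations; returns indices of the k highest-entropy columns."""
    n_scen, n_loc = C.shape
    H = np.empty(n_loc)
    for j in range(n_loc):
        counts, _ = np.histogram(C[:, j], bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]                      # drop empty bins (0·log 0 = 0)
        H[j] = -(p * np.log(p)).sum()     # Shannon entropy in nats
    return np.argsort(H)[::-1][:k]
```

A location whose concentration barely varies across scenarios carries little information, so a sensor there is wasted; the highest-entropy locations are exactly where a measurement discriminates best between scenarios.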

How to cite: Fishbain, B., Mano, Z., and Kendler, S.: Information theory solution approach for air-pollution sensors' location-allocation problem, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8334, https://doi.org/10.5194/egusphere-egu22-8334, 2022.

EGU22-8719 | Presentations | ITS2.6/AS5.1

Multi-station Multivariate Multi-step Convection Nowcasting with Deep Neural Networks 

Sandy Chkeir, Aikaterini Anesiadou, and Riccardo Biondi

Extreme weather nowcasting has always been a challenging task in meteorology. Many studies have sought to accurately forecast extreme weather events, related to rain rates and/or wind speeds exceeding thresholds, at various spatio-temporal scales. Over the past decades, this field has gained attention in the artificial intelligence community, which aims to create more accurate models using the latest algorithms and methods.

In this work, within the H2020 SESAR ALARM project, we aim to nowcast rain and wind speed as target features using different input configurations of the available sources such as weather stations, lightning detectors, radar, GNSS receivers, radiosondes and radio occultation data. This nowcasting task was first conducted as short-term temporal multi-step forecasting at 14 local stations around Milano Malpensa Airport. In a second step, all stations will be combined, turning the forecasting into a spatio-temporal problem. Concretely, we want to investigate the predicted rain and wind speed values using the different inputs in two scenarios: each station separately, and all stations joined together.

The chaotic nature of the atmosphere, e.g. the non-stationarity of the driving series of each weather feature, makes predictions unreliable and inaccurate, so dealing with these data is a very delicate task. For this reason, we have devoted considerable work to cleaning, feature engineering and preparing the raw data before feeding them into the model architectures. We have preprocessed large amounts of data for the local stations around the airport and studied the feasibility of nowcasting rain and wind speed targets using the different data sources together. The temporal multivariate driving series have high dimensionality, and we have made multi-step predictions for the defined target functions.

We study and test different machine learning architectures, from simple multi-layer perceptrons to convolutional models and Recurrent Neural Networks (RNN), for temporal and spatio-temporal nowcasting. The Long Short-Term Memory (LSTM) encoder-decoder architecture outperforms the other models, achieving more accurate predictions for each station separately. Furthermore, to predict the targets on a spatio-temporal scale, we will deploy a 2-layer stacked spatio-temporal LSTM model, consisting of independent LSTM models per location in the first layer and a second LSTM layer to predict the targets multiple steps ahead. Results obtained with the different architectures applied to a dense network of sensors will be reported.
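The multi-step, feed-the-prediction-back-in idea at the heart of encoder-decoder forecasting can be illustrated with a simple autoregressive stand-in (a linear AR model replaces the LSTM here, purely for illustration; the recursion over forecast steps is the same):

```python
import numpy as np

def fit_ar(series, p=2):
    """Least-squares AR(p) fit: predict x_t from the p previous values.
    A linear stand-in for the encoder part of an encoder-decoder model."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast(series, coef, steps):
    """Multi-step forecast: each prediction is appended to the input
    window and fed back in, exactly as in autoregressive decoding."""
    window = list(series[-len(coef):])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coef, window))
        out.append(nxt)
        window = window[1:] + [nxt]
    return out
```

An LSTM decoder replaces the dot product with a learned nonlinear state update, but the multi-step recursion is identical.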

How to cite: Chkeir, S., Anesiadou, A., and Biondi, R.: Multi-station Multivariate Multi-step Convection Nowcasting with Deep Neural Networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8719, https://doi.org/10.5194/egusphere-egu22-8719, 2022.

EGU22-8852 | Presentations | ITS2.6/AS5.1

Time-dependent Hillshades: Dispelling the Shadow Curse of Machine Learning Applications in Earth Observation 

Freddie Kalaitzis, Gonzalo Mateo-Garcia, Kevin Dobbs, Dolores Garcia, Jason Stoker, and Giovanni Marchisio

We show that machine learning models learn and perform better when they know where to expect shadows, through hillshades modeled to the time of imagery acquisition.

Shadows are detrimental to all machine learning applications on satellite imagery. Prediction tasks like semantic/instance segmentation, object detection, and the counting of rivers, roads, buildings or trees all rely on crisp edges and colour gradients that are confounded by the presence of shadows in passive optical imagery, which relies on the sun’s illumination for reflectance values.

Hillshading is a standard technique for enriching a mapped terrain with relief effects, which is done by emulating the shadow caused by steep terrain and/or tall vegetation. A hillshade that is modeled to the time of day and year can be easily derived through a basic form of ray tracing on a Digital Terrain Model (DTM) (also known as a bare-earth DEM) or Digital Surface Model (DSM) given the sun's altitude and azimuth angles. In this work, we use lidar-derived DSMs. A DSM-based hillshade conveys a lot more information on shadows than a bare-earth DEM alone, namely any non-terrain vertical features (e.g. vegetation, buildings) resolvable at a 1-m resolution. The use of this level of fidelity of DSM for hillshading and its input to a machine learning model is novel and the main contribution of our work. Any uncertainty over the angles can be captured through a composite multi-angle hillshade, which shows the range where shadows can appear throughout the day.
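The basic hillshade computation described above can be written in a few lines given a DSM and the sun's altitude and azimuth. This sketch uses the standard slope/aspect formulation found in GIS tools rather than full ray tracing, so cast shadows from distant features are not modelled, and the axis/azimuth convention is an assumption for illustration rather than the authors' exact implementation:

```python
import numpy as np

def hillshade(dsm, sun_altitude_deg, sun_azimuth_deg, cellsize=1.0):
    """Slope/aspect hillshade of a DSM for a given sun position.

    Azimuth is interpreted in the array's own coordinate frame here
    (an assumption; GIS tools use a north-based compass azimuth)."""
    zen = np.radians(90.0 - sun_altitude_deg)        # solar zenith angle
    az = np.radians(sun_azimuth_deg)
    dy, dx = np.gradient(dsm, cellsize)              # surface gradients
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(dy, -dx)                     # downslope direction
    shaded = (np.cos(zen) * np.cos(slope)
              + np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)                 # 0 = fully shaded
```

Evaluating this with the sun angles at the imagery's acquisition timestamp yields the time-dependent hillshade; evaluating it over a range of angles and compositing gives the multi-angle variant mentioned above.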

We show the utility of time-dependent hillshades in the daily mapping of rivers from Very High Resolution (VHR) passive optical and lidar-derived terrain data [1]. Specifically, we leverage the acquisition timestamps within a daily 3m PlanetScope product over a 2-year period. Given a datetime and geolocation, we model the sun’s azimuth and elevation relative to that geolocation at that time of day and year. We can then generate a time-dependent hillshade and therefore locate shadows at any given time within that 2-year period. In our ablation study we show that, of all the lidar-derived products, the time-dependent hillshades contribute an 8-9% accuracy improvement in the semantic segmentation of rivers. This indicates that a semantic segmentation machine learning model is less prone to errors of commission (false positives), by better disambiguating shadows from dark water.

Time-dependent hillshades are not currently used in ML for EO use cases, yet they can be useful. All that is needed to produce them is access to high-resolution bare-earth DEMs, like that of the US National 3D Elevation Program covering the entire continental U.S. at 1-meter resolution, or the creation of DSMs from the lidar point cloud data itself. As the coverage of DSM and/or DEM products expands to more parts of the world, time-dependent hillshades could become as commonplace as cloud masks in EO use cases.


[1] Dolores Garcia, Gonzalo Mateo-Garcia, Hannes Bernhardt, Ron Hagensieker, Ignacio G. Lopez-Francos, Jonathan Stock, Guy Schumann, Kevin Dobbs and Freddie Kalaitzis: Pix2Streams: Dynamic Hydrology Maps from Satellite-LiDAR Fusion. AI for Earth Sciences Workshop, NeurIPS 2020.

How to cite: Kalaitzis, F., Mateo-Garcia, G., Dobbs, K., Garcia, D., Stoker, J., and Marchisio, G.: Time-dependent Hillshades: Dispelling the Shadow Curse of Machine Learning Applications in Earth Observation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8852, https://doi.org/10.5194/egusphere-egu22-8852, 2022.

EGU22-9348 | Presentations | ITS2.6/AS5.1

Data-driven modelling of soil moisture: mapping organic soils 

Doran Khamis, Matt Fry, Hollie Cooper, Ross Morrison, and Eleanor Blyth

Improving our understanding of soil moisture and hydraulics is crucial for flood prediction, smart agriculture, modelling nutrient and pollutant spread and evaluating the role of land as a sink or source of carbon and other greenhouse gases. State-of-the-art land surface models rely on poorly resolved soil textural information to parametrise arbitrarily layered soil models; soils rich in organic matter – key to understanding the role of the land in achieving net zero carbon – are not well modelled. Here, we build a predictive data-driven model of soil moisture using a neural network composed of transformer layers to process time series data from point sensors (precipitation gauges and sensor-derived estimates of potential evaporation) and convolutional layers to process spatial atmospheric driving data and contextual information (topography, land cover and use, location and catchment behaviour of water bodies). We train the model using data from the COSMOS-UK sensor network and soil moisture satellite products and compare the outputs with JULES to investigate where and why the models diverge. Finally, we predict regions of high peat content and propose a way to combine theory with our data-driven approach to move beyond the sand-silt-clay modelling framework.

How to cite: Khamis, D., Fry, M., Cooper, H., Morrison, R., and Blyth, E.: Data-driven modelling of soil moisture: mapping organic soils, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9348, https://doi.org/10.5194/egusphere-egu22-9348, 2022.

EGU22-9452 | Presentations | ITS2.6/AS5.1

Eddy identification from along track altimeter data using deep learning: EDDY project 

Adili Abulaitijiang, Eike Bolmer, Ribana Roscher, Jürgen Kusche, Luciana Fenoglio, and Sophie Stolzenberger

Eddies are circular rotating water masses, usually generated near large ocean currents, e.g., the Gulf Stream. Monitoring eddies and gaining knowledge of eddy statistics over a large region are important for fisheries, marine biology studies, and the testing of ocean models.

At the mesoscale, eddies are observed in radar altimetry, and methods have been developed to identify, track and classify them in gridded maps of sea surface height derived from multi-mission data sets. However, this procedure has drawbacks, since much information is lost in the gridded maps: inevitably, the spatial and temporal resolution of the original altimetry data degrades during the gridding process. Moreover, identifying eddies has so far been a post-analysis step on the gridded dataset, which is not suitable for near-real-time applications or forecasts. In the EDDY project at the University of Bonn, we aim to develop methods for identifying eddies directly from along-track altimetry data via a machine (deep) learning approach.

At the early stage of the project, we started with gridded altimetry maps to set up and test the machine learning algorithm. The gridded datasets are not limited to multi-mission gridded maps from AVISO, but also include a high-resolution (~6 km) ocean model simulation dataset (e.g., FESOM, the Finite Element Sea ice Ocean Model). Later, the gridded maps are sampled along the real altimetry ground tracks to obtain single-track altimetry data. Reference data, serving as the training set for machine learning, will be produced by an open-source, geometry-based approach (e.g., py-eddy-tracker; Mason et al., 2014) with additional constraints such as the Okubo-Weiss parameter and Sea Surface Temperature (SST) profile signatures.
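One of the constraints named above, the Okubo-Weiss parameter, is directly computable from a gridded velocity field; regions where W < 0 are vorticity-dominated and are commonly used to flag eddy cores. A minimal sketch:

```python
import numpy as np

def okubo_weiss(u, v, dx=1.0, dy=1.0):
    """Okubo-Weiss parameter W = s_n^2 + s_s^2 - omega^2 for a gridded
    velocity field u(y, x), v(y, x); W < 0 marks vorticity-dominated
    (eddy-core) regions."""
    u_y, u_x = np.gradient(u, dy, dx)   # derivatives along rows (y), cols (x)
    v_y, v_x = np.gradient(v, dy, dx)
    s_n = u_x - v_y                     # normal strain
    s_s = v_x + u_y                     # shear strain
    omega = v_x - u_y                   # relative vorticity
    return s_n**2 + s_s**2 - omega**2
```

For a solid-body rotation (u = -y, v = x) this gives W = -4 everywhere, while a pure strain field (u = x, v = -y) gives W = +4, matching the sign convention used for eddy detection.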

In this presentation, we introduce the EDDY project and show results of the machine learning approach based on gridded datasets for the Gulf Stream area for 2017, as well as first results of single-track eddy identification in the region.

How to cite: Abulaitijiang, A., Bolmer, E., Roscher, R., Kusche, J., Fenoglio, L., and Stolzenberger, S.: Eddy identification from along track altimeter data using deep learning: EDDY project, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9452, https://doi.org/10.5194/egusphere-egu22-9452, 2022.

EGU22-9578 | Presentations | ITS2.6/AS5.1

A multivariate convolutional autoencoder to reconstruct satellite data with an error estimate based on non-gridded observations: application to sea surface height 

Alexander Barth, Aida Alvera-Azcárate, Charles Troupin, and Jean-Marie Beckers

DINCAE (Data INterpolating Convolutional Auto-Encoder) is a neural network for reconstructing missing data (e.g. obscured by clouds, or gaps between tracks) in satellite data. Contrary to standard image reconstruction (in-painting) with neural networks, this application requires a method that handles missing data (or data with variable accuracy) already in the training phase. Instead of using a cost function based on the mean square error, the neural network (a U-Net type of network) is optimized by minimizing the negative log-likelihood assuming a Gaussian distribution (characterized by a mean and a variance). As a consequence, the neural network also provides an expected error variance of the reconstructed field (per pixel and per time instance).
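The masked Gaussian negative log-likelihood used in place of the MSE can be sketched as follows (a NumPy sketch of the loss only; DINCAE itself evaluates this inside the network training loop, and the masking is how missing pixels drop out of training):

```python
import numpy as np

def gaussian_nll(y_true, mean, log_var, mask):
    """Mean negative log-likelihood of the observed pixels under a
    per-pixel Gaussian N(mean, exp(log_var)).

    mask == 0 marks missing data (clouds, track gaps): those pixels
    contribute nothing to the loss, so the network can be trained on
    incomplete fields without imputing them first."""
    var = np.exp(log_var)
    nll = 0.5 * (np.log(2 * np.pi) + log_var + (y_true - mean) ** 2 / var)
    return (nll * mask).sum() / mask.sum()
```

Because the network outputs both `mean` and `log_var`, minimizing this loss simultaneously fits the reconstruction and calibrates its per-pixel error variance.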


In the updated version, DINCAE 2.0, the code was rewritten in Julia and a new type of skip connection has been implemented, which shows superior performance with respect to the previous version. The method has also been extended to handle multivariate data (an example will be shown with sea-surface temperature, chlorophyll concentration and wind fields). The improvement of this network is demonstrated in the Adriatic Sea.


Convolutional networks usually work with gridded data as input. This is, however, a limitation for some data types used in oceanography and in Earth Sciences in general, where observations are often irregularly sampled. The first layer of the neural network and the cost function have been modified so that unstructured data can also be used as input to obtain gridded fields as output. To demonstrate this, the neural network is applied to along-track altimetry data in the Mediterranean Sea. Results from a 20-year reconstruction are presented and validated. Hyperparameters are determined using Bayesian optimization, minimizing the error relative to a development dataset.
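One way such a cost function can connect a gridded network output to scattered observations is to sample the output field at the observation locations before computing the misfit. This is a sketch under that assumption, not the authors' exact implementation:

```python
import numpy as np

def interp_at_points(grid, ys, xs):
    """Bilinearly sample a gridded field at scattered (y, x) positions
    given in fractional grid-index coordinates."""
    y0 = np.clip(np.floor(ys).astype(int), 0, grid.shape[0] - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, grid.shape[1] - 2)
    wy = ys - y0
    wx = xs - x0
    return ((1 - wy) * (1 - wx) * grid[y0, x0]
            + (1 - wy) * wx * grid[y0, x0 + 1]
            + wy * (1 - wx) * grid[y0 + 1, x0]
            + wy * wx * grid[y0 + 1, x0 + 1])

def scattered_mse(grid, obs_y, obs_x, obs_val):
    """Misfit between a gridded output and non-gridded observations:
    sample the grid at the observation points, then compare."""
    pred = interp_at_points(grid, obs_y, obs_x)
    return float(np.mean((pred - obs_val) ** 2))
```

Because the sampling operator is differentiable in the grid values, gradients flow back to the network even though the observations never live on the grid.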

How to cite: Barth, A., Alvera-Azcárate, A., Troupin, C., and Beckers, J.-M.: A multivariate convolutional autoencoder to reconstruct satellite data with an error estimate based on non-gridded observations: application to sea surface height, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9578, https://doi.org/10.5194/egusphere-egu22-9578, 2022.

EGU22-9734 | Presentations | ITS2.6/AS5.1

High Impact Weather Forecasts in Southern Brazil using Ensemble Precipitation Forecasts and Machine Learning 

Cesar Beneti, Jaqueline Silveira, Leonardo Calvetti, Rafael Inouye, Lissette Guzman, Gustavo Razera, and Sheila Paz

In South America, southern parts of Brazil, Paraguay and northeast Argentina are regions particularly prone to high impact weather (intense lightning activity, high precipitation, hail, flash floods and occasional tornadoes), mostly associated with extra-tropical cyclones, frontal systems and Mesoscale Convective Systems. In the south of Brazil, agriculture and electrical power generation are the main economic activities. This region is responsible for 35% of all hydro-power energy production in the country, with long transmission lines to the main consumer regions, which are severely affected by these extreme weather conditions. Intense precipitation events are a common cause of electricity outages in southern Brazil, which also ranks among the regions of Brazil with the highest annual lightning incidence. Accurate precipitation forecasts can mitigate this kind of problem. Despite improvements in precipitation estimates and forecasts, some difficulties remain in increasing accuracy, mainly related to the temporal and spatial location of events. Although several options are available, it is difficult to identify which deterministic forecast is the best or most reliable. Probabilistic products from large ensemble prediction systems give forecasters a guide to how confident they should be in the deterministic forecast, and one approach is to use post-processing methods such as machine learning (ML), which has been used to identify patterns in historical data and correct for systematic ensemble biases.

In this paper, we present a study in which we used 20 members from the Global Ensemble Forecast System (GEFS) and 50 members from the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble during 2019-2021, for seven daily precipitation categories: 0-1.0 mm, 1.0-15 mm, 15-40 mm, 40-55 mm, 55-105 mm, 105-155 mm and over 155 mm. An ML algorithm was developed for each day, up to 15 days of forecasts, and several skill scores were calculated for these daily precipitation categories. Initially, a gradient boosting algorithm was applied to select the best members of the ensembles, in order to improve the skill of the model and reduce processing time. After preprocessing the data, a random forest classifier was used to train the model. Based on hyperparameter sensitivity tests, the random forest required 500 trees, a maximum tree depth of 12 levels, at least 20 samples per leaf node, and the minimization of entropy for splits. To evaluate the models, we used cross-validation on a limited data sample. The procedure has a single parameter, the number of groups into which a given data sample is split; we created a twenty-six-fold cross-validation with 30 days per fold to verify the forecasts. The random forest results were evaluated by comparing predicted with observed values. Over the forecast range, we found precision above 75% in the first 3 days and around 68% on the following days. Recall was around 80% throughout the entire forecast range. These promising results encourage applying the technique operationally, which is our intent in the near future.
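The 26-fold, 30-days-per-fold cross-validation can be sketched as a blocked splitter (a hypothetical helper written for illustration; keeping each fold a contiguous 30-day block prevents leakage between temporally adjacent days, which ordinary shuffled k-fold would allow):

```python
import numpy as np

def blocked_folds(n_days=780, n_folds=26, days_per_fold=30):
    """Yield (train_days, test_days) index pairs for a blocked k-fold
    cross-validation: 26 contiguous 30-day test blocks, the remaining
    days forming each fold's training set."""
    days = np.arange(n_days)
    for f in range(n_folds):
        lo, hi = f * days_per_fold, (f + 1) * days_per_fold
        test = days[lo:hi]
        train = np.concatenate([days[:lo], days[hi:]])
        yield train, test
```

Each fold's day indices would then select the corresponding samples for fitting the random forest (500 trees, depth 12, at least 20 samples per leaf, entropy splits, per the sensitivity tests above).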

How to cite: Beneti, C., Silveira, J., Calvetti, L., Inouye, R., Guzman, L., Razera, G., and Paz, S.: High Impact Weather Forecasts in Southern Brazil using Ensemble Precipitation Forecasts and Machine Learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9734, https://doi.org/10.5194/egusphere-egu22-9734, 2022.

EGU22-9833 | Presentations | ITS2.6/AS5.1

Deep learning for laboratory earthquake prediction and autoregressive forecasting of fault zone stress 

Laura Laurenti, Elisa Tinti, Fabio Galasso, Luca Franco, and Chris Marone

Earthquake forecasting and prediction have long, and in some cases sordid, histories, but recent work has rekindled interest in this area based on advances in short-term early warning, hazard assessment for human-induced seismicity, and the successful prediction of laboratory earthquakes.

In the lab, frictional stick-slip events provide an analog for the full seismic cycle, and such experiments have played a central role in understanding the onset of failure and the dynamics of earthquake rupture. Lab earthquakes are also ideal targets for machine learning (ML) techniques because they can be produced in long sequences under a wide range of controlled conditions. Indeed, recent work shows that labquakes can be predicted from fault zone acoustic emissions (AE). Here, we generalize these results and explore additional ML and deep learning (DL) methods for labquake prediction. Key questions include whether improved ML/DL methods can outperform existing models, including prediction based on limited training data, and whether such methods can successfully forecast beyond a single seismic cycle for aperiodic failure. We describe significant improvements to existing methods of labquake prediction using simple AE statistics (variance) and DL models such as Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) models. We demonstrate: 1) that LSTMs and CNNs predict labquakes under a variety of conditions, including pre-seismic creep, aperiodic events and alternating slow and fast events, and 2) that fault zone stress can be predicted with fidelity (accuracy in terms of R² > 0.92), confirming that acoustic energy is a fingerprint of the fault zone stress. We also predict the time to the start of failure (TTsF) and the time to the end of failure (TTeF). Interestingly, TTeF is successfully predicted in all seismic cycles, while the TTsF prediction varies with the amount of fault creep before an event. We also report on a novel autoregressive forecasting method to predict future fault zone states, focusing on shear stress. This forecasting model is distinct from existing predictive models, which predict only the current state. We compare three modern approaches in a sequence modeling framework: LSTM, Temporal Convolution Network (TCN) and Transformer Network (TF). Results are encouraging for autoregressively forecasting the shear stress at long-term future horizons. Our ML/DL prediction models outperform the state of the art, and our autoregressive model represents a novel forecasting framework that could enhance current methods of earthquake forecasting.

How to cite: Laurenti, L., Tinti, E., Galasso, F., Franco, L., and Marone, C.: Deep learning for laboratory earthquake prediction and autoregressive forecasting of fault zone stress, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9833, https://doi.org/10.5194/egusphere-egu22-9833, 2022.

EGU22-10157 | Presentations | ITS2.6/AS5.1

How land cover changes affect ecosystem productivity 

Andreas Krause, Phillip Papastefanou, Konstantin Gregor, Lucia Layritz, Christian S. Zang, Allan Buras, Xing Li, Jingfeng Xiao, and Anja Rammig

Historically, many forests worldwide were cut down and replaced by agriculture. While this substantially reduced terrestrial carbon storage, the impacts of land-use change on ecosystem productivity have not been adequately resolved yet.

Here, we apply the machine learning algorithm Random Forests to predict the potential gross primary productivity (GPP) of forests, grasslands, and croplands around the globe using high-resolution datasets of satellite-derived GPP, land cover, and 20 environmental predictor variables.

With a mean potential GPP of around 2.0 kg C m⁻² yr⁻¹, forests are the most productive land cover on two thirds of the global suitable area, while grasslands and croplands are on average 23% and 9% less productive, respectively. These findings are robust against alternative input datasets and algorithms, even though the results are somewhat sensitive to the underlying land cover map.

Combining our potential GPP maps with a land-use reconstruction from the Land-Use Harmonization project (LUH2), we estimate that historical agricultural expansion reduced global GPP by around 6.3 Gt C yr⁻¹ (4.4%). This reduction in GPP induced by land cover changes is amplified in some future scenarios as a result of ongoing deforestation but partly reversed in other scenarios due to agricultural abandonment.

Finally, we compare our potential GPP maps to simulations from eight CMIP6 Earth System Models with an explicit representation of land management. While the mean GPP values of the ESM ensemble show reasonable agreement with our estimates, individual Earth System Models simulate large deviations both in terms of mean GPP values of different land cover types as well as in their spatial variations. Reducing these model biases would lead to more reliable simulations concerning the potential of land-based mitigation policies.

How to cite: Krause, A., Papastefanou, P., Gregor, K., Layritz, L., Zang, C. S., Buras, A., Li, X., Xiao, J., and Rammig, A.: How land cover changes affect ecosystem productivity, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10157, https://doi.org/10.5194/egusphere-egu22-10157, 2022.

EGU22-10519 | Presentations | ITS2.6/AS5.1 | Highlight

Adaptive Bias Correction for Improved Subseasonal Forecasting 

Soukayna Mouatadid, Paulo Orenstein, Genevieve Flaspohler, Miruna Oprescu, Judah Cohen, Franklyn Wang, Sean Knight, Maria Geogdzhayeva, Sam Levang, Ernest Fraenkel, and Lester Mackey

Improving our ability to forecast the weather and climate is of interest to all sectors of the economy and government agencies from the local to the national level. In fact, weather forecasts 0-10 days ahead and climate forecasts seasons to decades ahead are currently used operationally in decision-making, and the accuracy and reliability of these forecasts have improved consistently in recent decades. However, many critical applications require subseasonal forecasts with lead times in between these two timescales. Subseasonal forecasting—predicting temperature and precipitation 2-6 weeks ahead—is indeed critical for effective water allocation, wildfire management, and drought and flood mitigation. Yet accurate forecasts for the subseasonal regime are still lacking due to the chaotic nature of weather.

While short-term forecasting accuracy is largely sustained by physics-based dynamical models, these deterministic methods have limited subseasonal accuracy due to chaos. Indeed, subseasonal forecasting has long been considered a “predictability desert” due to its complex dependence on both local weather and global climate variables. Nevertheless, recent large-scale research efforts have advanced the subseasonal capabilities of operational physics-based models, while parallel efforts have demonstrated the value of machine learning and deep learning methods in improving subseasonal forecasting.

To counter the systematic errors of dynamical models at longer lead times, we introduce an adaptive bias correction (ABC) method that combines state-of-the-art dynamical forecasts with observations using machine learning. We evaluate our adaptive bias correction method in the contiguous U.S. over the years 2011-2020 and demonstrate consistent improvement over standard meteorological baselines, state-of-the-art learning models, and the leading subseasonal dynamical models, as measured by root mean squared error and uncentered anomaly correlation skill. When applied to the United States’ operational climate forecast system (CFSv2), ABC improves temperature forecasting skill by 20-47% and precipitation forecasting skill by 200-350%. When applied to the leading subseasonal model from the European Centre for Medium-Range Weather Forecasts (ECMWF), ABC improves temperature forecasting skill by 8-38% and precipitation forecasting skill by 40-80%.
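The core idea of correcting a dynamical forecast with its own recent errors can be illustrated minimally as follows. The assumption here, made for illustration only, is a rolling-mean bias estimate; the actual ABC method learns the correction with machine learning from many covariates:

```python
import numpy as np

def adaptive_bias_correction(forecasts, observations, window=30):
    """Debias a forecast series by subtracting a rolling estimate of its
    past error (forecast minus observation) over the last `window` steps.

    A rolling mean stands in for the learned correction model of ABC;
    the adaptivity is the same: the correction at time t only uses
    forecast/observation pairs available before t."""
    corrected = forecasts.astype(float).copy()
    for t in range(len(forecasts)):
        lo = max(0, t - window)
        if t > 0:
            bias = np.mean(forecasts[lo:t] - observations[lo:t])
            corrected[t] = forecasts[t] - bias
    return corrected
```

For a forecast with a constant systematic bias, this correction removes the error entirely after the first step; the learned version generalizes the idea to biases that depend on location, season and model state.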

Overall, we find that de-biasing dynamical forecasts with our learned adaptive bias correction method yields an effective and computationally inexpensive strategy for generating improved subseasonal forecasts and building the next generation of subseasonal forecasting benchmarks. To facilitate future subseasonal benchmarking and development, we release our model code through the subseasonal_toolkit Python package and our routinely updated SubseasonalClimateUSA dataset through the subseasonal_data Python package.

How to cite: Mouatadid, S., Orenstein, P., Flaspohler, G., Oprescu, M., Cohen, J., Wang, F., Knight, S., Geogdzhayeva, M., Levang, S., Fraenkel, E., and Mackey, L.: Adaptive Bias Correction for Improved Subseasonal Forecasting, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10519, https://doi.org/10.5194/egusphere-egu22-10519, 2022.

EGU22-10711 | Presentations | ITS2.6/AS5.1

A new approach toward integrated inversion of reflection seismic and gravity datasets using deep learning 

Mahtab Rashidifard, Jeremie Giraud, Mark Jessell, and Mark Lindsay

Reflection seismic data, although sparsely distributed due to the high cost of acquisition, is the only type of data that can provide high-resolution images of the crust, revealing deep subsurface structures and the architectural complexity that may direct attention to minerally prospective regions. However, these datasets are not commonly considered in integrated geophysical inversion approaches because their forward modeling and inversion are computationally expensive. Common inversion techniques for reflection seismic images were mostly developed for basin studies and have very limited application to hard-rock studies. Post-stack acoustic impedance inversions, for example, rely heavily on petrophysical information extracted along boreholes for depth correction, and such information is not necessarily available. Furthermore, the available techniques do not allow simple, automatic integration of seismic inversion with other geophysical datasets.

 

 We introduce a new methodology that allows the utilization of the seismic images within the gravity inversion technique with the purpose of 3D boundary parametrization of the subsurface. The proposed workflow is a novel approach for incorporating seismic images into the integrated inversion techniques which relies on the image-ray method for depth-to-time domain conversion of seismic datasets. This algorithm uses a convolutional neural network to iterate over seismic images in time and depth domains. This iterative process is functional to compensate for the low depth resolution of the gravity datasets. We use a generalized level-set technique for gravity inversion to link the interfaces of the units with the depth-converted seismic images. The algorithm has been tested on realistic synthetic datasets generated from scenarios corresponding to different deformation histories. The preliminary results of this study suggest that post-stack seismic images can be utilized in integrated geophysical inversion algorithms without the need to run computationally expensive full wave-form inversions.  

How to cite: Rashidifard, M., Giraud, J., Jessell, M., and Lindsay, M.: A new approach toward integrated inversion of reflection seismic and gravity datasets using deep learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10711, https://doi.org/10.5194/egusphere-egu22-10711, 2022.

EGU22-11043 | Presentations | ITS2.6/AS5.1

Framework for the deployment of DNNs in remote sensing inversion algorithms applied to Copernicus Sentinel-4 (S4) and TROPOMI/Sentinel-5 Precursor (S5P) 

Fabian Romahn, Victor Molina Garcia, Ana del Aguila, Ronny Lutz, and Diego Loyola

In remote sensing, the quantities of interest (e.g. the composition of the atmosphere) are usually not directly observable and can only be inferred indirectly via the measured spectra. To solve these inverse problems, retrieval algorithms are applied that usually depend on complex physical models, so-called radiative transfer models (RTMs). RTMs are very accurate, but also computationally very expensive, and therefore often not compatible with the strict time requirements of operational processing of satellite measurements. With the advances in machine learning, the methods of this field, especially deep neural networks (DNNs), have become very promising for accelerating and improving classical remote sensing retrieval algorithms. However, their application is not straightforward; it is quite challenging, as there are many aspects to consider and parameters to optimize in order to achieve satisfying results.

In this presentation we show a general framework for replacing the RTM used in an inversion algorithm with a DNN that offers sufficient accuracy while at the same time increasing processing performance by several orders of magnitude. The different steps are explained in detail: sampling and generation of the training data, selection of the DNN hyperparameters, training, and finally integration of the DNN into an operational environment. We also focus on optimizing the efficiency of each step: generating training samples through smart sampling techniques, accelerating training data generation through parallelization and other optimizations of the RTM, applying tools for DNN hyperparameter optimization, and using automation tools (source code generation) and appropriate interfaces for efficient integration into operational processing systems.
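The surrogate idea can be sketched with a toy stand-in for the RTM. The function `toy_rtm`, the sampling scheme and all hyperparameters below are assumptions for demonstration only, not the operational S4/S5P setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in "RTM": an expensive forward model mapping an atmospheric state
# (here just two parameters) to a scalar radiance-like quantity.
def toy_rtm(x):
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(42)
X_train = rng.uniform(0, 1, size=(2000, 2))   # smart sampling would go here
y_train = toy_rtm(X_train)

# Surrogate DNN; in practice the architecture comes from a dedicated
# hyperparameter optimization step.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

# Accuracy check on held-out samples: the surrogate should approximate the
# forward model closely while being orders of magnitude cheaper to evaluate.
X_test = rng.uniform(0, 1, size=(200, 2))
err = np.abs(surrogate.predict(X_test) - toy_rtm(X_test)).mean()
```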

This procedure has been continuously developed throughout the last years and as a use case, it will be shown how it has been applied in the operational retrieval of cloud properties for the Copernicus satellite sensors Sentinel-4 (S4) and TROPOMI/Sentinel-5 Precursor (S5P).

How to cite: Romahn, F., Molina Garcia, V., del Aguila, A., Lutz, R., and Loyola, D.: Framework for the deployment of DNNs in remote sensing inversion algorithms applied to Copernicus Sentinel-4 (S4) and TROPOMI/Sentinel-5 Precursor (S5P), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11043, https://doi.org/10.5194/egusphere-egu22-11043, 2022.

EGU22-11420 | Presentations | ITS2.6/AS5.1

History Matching for the tuning of coupled models: experiments on the Lorenz 96 model 

Redouane Lguensat, Julie Deshayes, and Venkatramani Balaji

The process of relying on experience and intuition to find good sets of parameters, commonly referred to as "parameter tuning", continues to play a central role in the roadmaps followed by the dozens of modeling groups involved in community efforts such as the Coupled Model Intercomparison Project (CMIP).

In this work, we study a tool from the Uncertainty Quantification community that has recently started to draw attention in climate modeling: History Matching, also referred to as "Iterative Refocussing". The core idea of History Matching is to run several simulations with different sets of parameters and then use observed data to rule out any parameter settings which are "implausible". Since climate simulation models are computationally heavy and do not allow testing every possible parameter setting, we employ an emulator as a cheap and accurate replacement. Here, a machine learning algorithm, namely Gaussian Process Regression, is used for the emulation step. History Matching is thus a good example of how recent advances in machine learning can be of high interest to climate modeling.

One objective of this study is to evaluate the potential of History Matching for tuning a climate system with multi-scale dynamics. Using a toy climate model, namely the Lorenz 96 model, and running experiments in a perfect-model setting, we explore different types of applications of HM and highlight the strengths and challenges of using such a technique.
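The rule-out step of History Matching can be sketched with a one-parameter toy simulator and a Gaussian Process emulator. Everything below is illustrative (the study itself uses the Lorenz 96 model); the cutoff of 3 is the commonly used implausibility threshold:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy "simulator": one tunable parameter -> one scalar output, standing in
# for a full (expensive) model run.
def simulator(theta):
    return np.sin(theta) + 0.1 * theta

obs, obs_var = simulator(1.2), 0.01 ** 2      # synthetic "observation"

# Emulate the simulator from a handful of design runs.
design = np.linspace(0, 3, 8).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(1.0)).fit(design, simulator(design.ravel()))

# Implausibility I(theta): standardized mismatch between emulator prediction
# and observation; settings with I > 3 are ruled out.
candidates = np.linspace(0, 3, 301).reshape(-1, 1)
mean, std = gp.predict(candidates, return_std=True)
implausibility = np.abs(mean - obs) / np.sqrt(std ** 2 + obs_var)
not_ruled_out = candidates[implausibility < 3].ravel()
```

The surviving set (`not_ruled_out`) is then refocused on in the next History Matching wave.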

How to cite: Lguensat, R., Deshayes, J., and Balaji, V.: History Matching for the tuning of coupled models: experiments on the Lorenz 96 model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11420, https://doi.org/10.5194/egusphere-egu22-11420, 2022.

EGU22-11465 | Presentations | ITS2.6/AS5.1

Quantile machine learning models for predicting European-wide, high resolution fine-mode Aerosol Optical Depth (AOD) based on ground-based AERONET and satellite AOD data 

Zhao-Yue Chen, Raul Méndez-Turrubiates, Hervé Petetin, Aleks Lacima, Albert Soret Miravet, Carlos Pérez García-Pando, and Joan Ballester

Air pollution is a major environmental risk factor for human health. Among the different air pollutants, Particulate Matter (PM) stands out as the most prominent one, with increasing health effects over the last decades. According to the Global Burden of Disease study, PM contributed to 4.14 million premature deaths globally in 2019, over twice as many as in 1990 (2.04 million). With these numbers in mind, the assessment of ambient PM exposure becomes a key issue in environmental epidemiology. However, the limited number of ground-level sites measuring daily PM values is a major constraint on the development of large-scale, high-resolution epidemiological studies.

In the last five years, there has been a growing number of initiatives estimating ground-level PM concentrations from satellite Aerosol Optical Depth (AOD) data, representing a low-cost alternative with higher spatial coverage compared to ground-level measurements. At present, the most popular AOD product is NASA's MODIS (Moderate Resolution Imaging Spectroradiometer), but the data it provides are restricted to Total Aerosol Optical Depth (TAOD). Compared with TAOD, Fine-mode Aerosol Optical Depth (FAOD) better describes the distribution of small-diameter particles (e.g. PM10 and PM2.5), which are generally those associated with anthropogenic activity. Complementarily, AERONET (AErosol RObotic NETwork, a network of ground-based sun photometers) additionally provides Fine- and Coarse-mode Aerosol Optical Depth (FAOD and CAOD) products based on Spectral Deconvolution Algorithms (SDA).

Within the framework of the ERC project EARLY-ADAPT (https://early-adapt.eu/), which aims to disentangle the association between human health, climate variability and air pollution to better estimate the early adaptation response to climate change, here we develop quantile machine learning models to further advance the association between AERONET FAOD and satellite AOD over Europe during the last two decades. Due to the large amount of missing data in the satellite estimates, we also included AOD estimates from ECMWF's Copernicus Atmosphere Monitoring Service Global Reanalysis (CAMSRA) and NASA's Modern-Era Retrospective Analysis for Research and Applications v2 (MERRA-2), together with atmosphere, land and ocean variables such as boundary layer height, downward UV radiation and cloud cover from ECMWF's ERA5-Land.

The models were thoroughly validated with spatial cross-validation. Preliminary results show that the R² of the three AOD estimates (TAOD, FAOD and CAOD) predicted with the quantile machine learning models ranges between 0.61 and 0.78, and the RMSE between 0.02 and 0.03. Regarding the Pearson correlation with ground-level PM2.5, the predicted FAOD reaches the highest value (0.38), compared with 0.18, 0.11 and 0.09 for satellite, MERRA-2 and CAMSRA AOD, respectively. This study provides three useful indicators for further estimating PM, which could improve our understanding of air pollution in Europe and open new avenues for large-scale, high-resolution environmental epidemiology studies.
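Quantile machine learning of this kind can be sketched with gradient-boosted quantile regression. The predictors and target below are synthetic stand-ins, not the EARLY-ADAPT data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(1000, 3))       # stand-ins for CAMSRA/MERRA-2 AOD etc.
y = X[:, 0] + 0.1 * rng.normal(size=1000)   # stand-in for AERONET FAOD

# One model per quantile: the median (alpha=0.5) gives the point estimate,
# the outer quantiles a predictive interval around it.
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                       random_state=0).fit(X, y)
          for q in (0.1, 0.5, 0.9)}
lo, med, hi = (models[q].predict(X) for q in (0.1, 0.5, 0.9))
```

Fitting separate quantiles yields uncertainty bands alongside the central AOD estimate, which is the appeal of the quantile formulation over plain regression.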

How to cite: Chen, Z.-Y., Méndez-Turrubiates, R., Petetin, H., Lacima, A., Soret Miravet, A., Pérez García-Pando, C., and Ballester, J.: Quantile machine learning models for predicting European-wide, high resolution fine-mode Aerosol Optical Depth (AOD) based on ground-based AERONET and satellite AOD data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11465, https://doi.org/10.5194/egusphere-egu22-11465, 2022.

EGU22-11924 | Presentations | ITS2.6/AS5.1

Automated detection and classification of synoptic scale fronts from atmospheric data grids 

Stefan Niebler, Peter Spichtinger, Annette Miltenberger, and Bertil Schmidt

Automatic determination of fronts from atmospheric data is an important task for weather prediction as well as for research on synoptic-scale phenomena. We developed a deep neural network to detect and classify fronts from multi-level ERA5 reanalysis data. Model training and prediction are evaluated using two different regions covering Europe and North America, with data from two weather services. Thanks to a label deformation step performed during training, we are able to directly generate frontal lines, with no further thinning needed during post-processing. Our network compares well against the weather service labels, with a Critical Success Index higher than 66.9% and an Object Detection Rate of more than 77.3%. Additionally, the frontal climatologies generated from our network's output are highly correlated (greater than 77.2%) with climatologies created from weather service data. Evaluation of cross-sections of our detection results provides further insight into the characteristics of the predicted fronts and shows that our network's classification is physically plausible.
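For reference, the two verification scores can be computed from a contingency table as follows. These are the standard generic definitions, not the authors' specific object-matching procedure:

```python
def critical_success_index(hits, misses, false_alarms):
    """CSI = hits / (hits + misses + false alarms); 1.0 is a perfect score."""
    return hits / (hits + misses + false_alarms)

def object_detection_rate(hits, misses):
    """Fraction of labelled front objects that are detected at all."""
    return hits / (hits + misses)
```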

How to cite: Niebler, S., Spichtinger, P., Miltenberger, A., and Schmidt, B.: Automated detection and classification of synoptic scale fronts from atmospheric data grids, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11924, https://doi.org/10.5194/egusphere-egu22-11924, 2022.

EGU22-12043 | Presentations | ITS2.6/AS5.1

A Domain-Change Approach to the Semantic Labelling of Remote Sensing Images 

Chandrabali Karmakar, Gottfried Schwartz, Corneliu Octavian Dumitru, and Mihai Datcu

For many years, image classification – mainly based on pixel brightness statistics – has been among the most popular remote sensing applications. However, in recent years many users have become more and more interested in the application-oriented semantic labelling of the image objects depicted in remotely sensed images.


In parallel, the development of deep learning algorithms has led to several powerful image classification and annotation tools that became popular in the remote sensing community. In most cases, these publicly available tools combine efficient algorithms with expert knowledge and/or external information ingested during an initial training phase, and we often encounter two alternative types of deep learning approaches, namely Autoencoders (AEs) and Convolutional Neural Networks (CNNs). Both approaches try to convert the pixel data of remote sensing images into semantic maps of the imaged areas. In our case, we made an attempt to provide an efficient new semantic annotation tool that helps in the semantic interpretation of newly recorded images with known and/or possibly unknown content.


Typical cases are remote sensing images depicting unexpected and hitherto uncharted phenomena such as flooding events or destroyed infrastructure. When we resort to the commonly applied AE or CNN software packages we cannot expect that existing statistics, or a few initial ground-truth annotations made by an image interpreter, will automatically lead to a perfect understanding of the image content. Instead, we have to discover and combine a number of additional relationships that define the actual content of a selected image and many of its characteristics.

Our approach consists of two stages. We first convert an image into a purely mathematical 'topic representation' initially introduced by Blei et al. [1]. This representation provides statistics-based topics that do not yet require final application-oriented labelling describing physical categories or phenomena, and it supports the idea of explainable machine learning [2]. Then, during a second stage, we try to derive physical image content categories by exploiting a weighted multi-level neural network approach that converts weighted topics into individual application-oriented labels. This domain-changing learning stage limits label noise and is initially supported by an image interpreter, allowing the joint use of pixel statistics and expert knowledge [3]. The activity of the image interpreter can be limited to a few image patches. We tested our approach on a number of different use cases (e.g., polar ice, agriculture, natural disasters) and found that our concept provides promising results.
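The first, statistics-based stage can be sketched with an off-the-shelf LDA implementation. The patch "word" counts below are synthetic placeholders for quantised image descriptors, not the authors' actual SAR features:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Each row: bag-of-"visual words" counts for one image patch (e.g. quantised
# texture descriptors); values here are synthetic.
rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(100, 50))

# Stage 1: purely statistical topics, not yet tied to physical labels.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
topic_weights = lda.fit_transform(counts)   # shape (patches, topics)
```

In the second stage, these per-patch topic weights would feed a small supervised network that maps topics to application-oriented labels.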


[1] D.M. Blei, A.Y. Ng, and M.I. Jordan, (2003). Latent Dirichlet Allocation, Journal of Machine Learning Research, Vol. 3, pp. 993-1022.
[2] C. Karmakar, C.O. Dumitru, G. Schwarz, and M. Datcu (2020). Feature-free explainable data mining in SAR images using latent Dirichlet allocation, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 14, pp. 676-689.
[3] C.O. Dumitru, G. Schwarz, and M. Datcu (2021). Semantic Labelling of Globally Distributed Urban and Non-Urban Satellite Images Using High-Resolution SAR Data, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 15, pp. 6009-6068.

How to cite: Karmakar, C., Schwartz, G., Dumitru, C. O., and Datcu, M.: A Domain-Change Approach to the Semantic Labelling of Remote Sensing Images, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12043, https://doi.org/10.5194/egusphere-egu22-12043, 2022.

EGU22-12489 | Presentations | ITS2.6/AS5.1

“Fully-automated” clustering method for stress inversions (CluStress) 

Lukács Kuslits, Lili Czirok, and István Bozsó

As is well known, stress fields are responsible for earthquake formation. In order to analyse the stress relations in a study area using inversions of focal mechanism solutions (FMS), it is vital to consider three fundamental criteria:

(1)       The investigated area is characterized by a homogeneous stress field.

(2)       The earthquakes occur with variable directions on pre-existing faults.

(3)       The deviation of the fault slip vector from the shear stress vector is minimal (Wallace-Bott hypothesis).

The authors have attempted to develop a "fully-automated" algorithm to carry out the classification of earthquakes as a prerequisite for stress estimation. The algorithm does not require the setting of hyper-parameters, so subjectivity can be reduced significantly and the running time can also decrease. Nevertheless, there is an optional hyper-parameter that can be used to filter outliers, i.e. isolated points (earthquakes), in the input dataset.
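One plausible form of such an optional outlier filter is a k-nearest-neighbour distance criterion. This is a hypothetical sketch; the CluStress source is not yet public, so the function name, the statistic and the thresholds are all assumptions:

```python
import numpy as np

def filter_isolated(points, k=3, factor=3.0):
    """Drop events whose mean distance to their k nearest neighbours exceeds
    `factor` times the median of that statistic over all events."""
    # Pairwise distances; column 0 after sorting is the zero self-distance.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn_dist = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    return points[knn_dist <= factor * np.median(knn_dist)]
```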

In this presentation, the authors show the operation of this algorithm on synthetic datasets consisting of different groups of FMS and on a real seismic dataset. The latter comes from a survey area in the earthquake-prone Vrancea zone (Romania), a relatively small region (around 30 × 70 km) in the external part of the SE Carpathians where the distribution of seismic events is quite dense and heterogeneous.

It should be noted that, although the initial results are promising, further developments are still necessary. The source code will soon be uploaded to a public GitHub repository, available to the whole scientific community.

How to cite: Kuslits, L., Czirok, L., and Bozsó, I.: “Fully-automated” clustering method for stress inversions (CluStress), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12489, https://doi.org/10.5194/egusphere-egu22-12489, 2022.

EGU22-12549 | Presentations | ITS2.6/AS5.1

Joint calibration and mapping of satellite altimetry data using trainable variational models 

Quentin Febvre, Ronan Fablet, Julien Le Sommer, and Clément Ubelmann

Satellite radar altimeters are a key source of observations of ocean surface dynamics. However, current sensor technology and mapping techniques do not yet allow scales smaller than 100 km to be systematically resolved. With their new sensors, upcoming wide-swath altimeter missions such as SWOT should help resolve finer scales. Current mapping techniques rely on the quality of the input data, which is why the raw data go through multiple preprocessing stages before being used. These calibration stages are improved and refined over many years and represent a challenge when a new type of sensor starts acquiring data.

We show how a data-driven variational data assimilation framework can be used to jointly learn a calibration operator and an interpolator from non-calibrated data. The proposed framework significantly outperforms the operational state-of-the-art mapping pipeline and truly benefits from wide-swath data to resolve finer scales on the global map as well as in the SWOT sensor geometry.

How to cite: Febvre, Q., Fablet, R., Le Sommer, J., and Ubelmann, C.: Joint calibration and mapping of satellite altimetry data using trainable variational models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12549, https://doi.org/10.5194/egusphere-egu22-12549, 2022.

EGU22-12574 | Presentations | ITS2.6/AS5.1 | Highlight

SWIFT-AI: Significant Speed-up in Modelling the Stratospheric Ozone Layer 

Helge Mohn, Daniel Kreyling, Ingo Wohltmann, Ralph Lehmann, Peter Maass, and Markus Rex

The stratospheric ozone layer is commonly represented in climate models only in a very simplified way. Neglecting the mutual interactions of ozone with atmospheric temperature and dynamics makes climate projections less accurate. Although more elaborate, interactive models of the stratospheric ozone layer are available, they require far too much computation time to be coupled with climate models. Our aim in this project was to break new ground and pursue an interdisciplinary strategy spanning the fields of machine learning, atmospheric physics and climate modelling.

In this work, we present an implicit neural representation of the extrapolar stratospheric ozone chemistry (SWIFT-AI). An implicitly defined hyperspace of the stratospheric ozone chemistry offers a continuous and even differentiable representation that can be parameterized by artificial neural networks. We analysed different parameter-efficient variants of multilayer perceptrons. This was followed by an intensive and, as far as possible, energy-efficient search for hyperparameters, involving Bayesian optimisation and early-stopping techniques.

Our data source is the Lagrangian chemistry and transport model ATLAS. Using its full model of stratospheric ozone chemistry, we focused on simulating the wide range of stratospheric variability that will occur in the future climate (e.g. temperature and meridional circulation changes). We conducted a simulation spanning several years and created a dataset with over 200 million input-output pairs. Each output is the 24-hour ozone tendency of a trajectory. We reduced the dimensionality of the input by using the concept of chemical families and by performing a sensitivity analysis to choose a set of robust input parameters.
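A reduced sketch of the resulting regression task, with synthetic inputs and a toy tendency function in place of the ATLAS-derived pairs (the real model, its inputs and its hyperparameter search are far more elaborate):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-ins for the reduced input parameters (chemical families,
# temperature, ...) and for the 24 h ozone tendency along a trajectory.
rng = np.random.default_rng(7)
X = rng.normal(size=(5000, 6))
y = np.tanh(X[:, 0]) - 0.5 * X[:, 1]          # toy tendency

# Parameter-efficient MLP trained with early stopping, echoing the
# hyperparameter search described above.
model = MLPRegressor(hidden_layer_sizes=(32, 32), early_stopping=True,
                     validation_fraction=0.1, max_iter=1000,
                     random_state=0).fit(X, y)
```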

We coupled the resulting machine learning models with the Lagrangian chemistry and transport model ATLAS, substituting the full stratospheric chemistry model. We validated a two-year simulation run by comparing accuracy and computation time against both the full stratospheric chemistry model and the previous polynomial approach of extrapolar SWIFT. We found that SWIFT-AI consistently outperforms the previous polynomial approach, both on test data and in simulation results. SWIFT-AI is more than twice as fast as the previous polynomial approach and 700 times faster than the full stratospheric chemistry scheme of ATLAS, resulting in minutes instead of weeks of computation time per model year, a speed-up of several orders of magnitude.

To ensure reproducibility and transparency, we developed a machine learning pipeline, published a benchmark dataset and made our repository open to the public.

In summary, we could show that the application of state-of-the-art machine learning methods to the field of atmospheric physics holds great potential. The achieved speed-up of an interactive and very precise ozone layer enables a novel way of representing the ozone layer in climate models. This in turn will increase the quality of climate projections, which are crucial for policy makers and of great importance for our planet.

How to cite: Mohn, H., Kreyling, D., Wohltmann, I., Lehmann, R., Maass, P., and Rex, M.: SWIFT-AI: Significant Speed-up in Modelling the Stratospheric Ozone Layer, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12574, https://doi.org/10.5194/egusphere-egu22-12574, 2022.

EGU22-12628 | Presentations | ITS2.6/AS5.1

Prediction of the North Atlantic Oscillation index for the winter months December-January-February via nonlinear methods 

C. Hauke, B. Ahrens, and C. Dalelane

Recently, an increase in the forecast skill of the seasonal climate forecast for winter in Europe has been achieved through an ensemble subsampling approach, by predicting the mean winter North Atlantic Oscillation (NAO) index through linear regression (based on the autumn state of four predictors: sea surface temperature, Arctic sea ice volume, Eurasian snow depth and stratospheric temperature) and sampling the ensemble members which are able to reproduce this NAO state. This thesis shows that the statistical prediction of the NAO index can be further improved via nonlinear methods using the same predictor variables as in the linear approach. This likely also leads to an increase in seasonal climate forecast skill. The data used for the calculations stem from ERA5, the global reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF). The available time span covered only the 40 years from 1980 to 2020, hence it was important to use a method that still yields statistically significant and meaningful results under those circumstances. The nonlinear method chosen was k-nearest neighbor, a simple yet powerful algorithm when there is not a lot of data available. Compared to other methods such as neural networks, it is easy to interpret. The resulting method has been developed and tested in a double cross-validation setting. While sea ice in the Barents-Kara Sea in September-October shows the most predictive capability for the NAO index in the subsequent winter as a single predictor, the highest forecast skill is achieved through a combination of different predictor variables.
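A minimal sketch of k-nearest-neighbour prediction with a cross-validated choice of k. The 40 "years" of predictors and the NAO target below are synthetic stand-ins for the ERA5-derived data, and only the inner loop of the double cross-validation is shown:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV, LeaveOneOut

# Synthetic stand-ins: 40 "years" of autumn predictors (SST, sea ice volume,
# snow depth, stratospheric temperature) and the winter NAO index.
rng = np.random.default_rng(3)
X = rng.normal(size=(40, 4))
y = 0.8 * X[:, 1] + 0.3 * rng.normal(size=40)

# Inner leave-one-out CV selects k; an outer loop (omitted) would give an
# unbiased estimate of the forecast skill.
search = GridSearchCV(KNeighborsRegressor(),
                      {"n_neighbors": [2, 3, 5, 7]},
                      cv=LeaveOneOut(), scoring="neg_mean_absolute_error")
search.fit(X, y)
best_k = search.best_params_["n_neighbors"]
```

Leave-one-out is a natural choice here because, with only 40 samples, every observation is needed for training.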

How to cite: Hauke, C., Ahrens, B., and Dalelane, C.: Prediction of the North Atlantic Oscillation index for the winter months December-January-February via nonlinear methods, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12628, https://doi.org/10.5194/egusphere-egu22-12628, 2022.

EGU22-12765 | Presentations | ITS2.6/AS5.1

Supervised machine learning to estimate instabilities in chaotic systems: computation of local Lyapunov exponents 

Daniel Ayers, Jack Lau, Javier Amezcua, Alberto Carrassi, and Varun Ojha

Weather and climate are well-known exemplars of chaotic systems exhibiting extreme sensitivity to initial conditions. Initial condition errors are subject to exponential growth on average, but the rate and the character of such growth are highly state-dependent. In an ideal setting where the degree of predictability of the system is known in real time, it may be possible and beneficial to take adaptive measures. For instance, a local decrease of predictability may be counteracted by increasing the time- or space-resolution of the model computation, or the ensemble size in the context of ensemble-based data assimilation or probabilistic forecasting.

Local Lyapunov exponents (LLEs) describe growth rates along a finite-time section of a system trajectory. This makes the LLEs the ideal quantities for measuring the local degree of predictability, yet the main bottleneck for their real-time use in operational scenarios is their huge computational cost. Calculating LLEs involves computing a long trajectory of the system, propagating perturbations with the tangent linear model, and repeatedly orthogonalising them. We investigate whether machine learning (ML) methods can estimate the LLEs based only on information from the system's solution, thus avoiding the need to evolve perturbations via the tangent linear model. We test the ability of four algorithms (regression tree, multilayer perceptron, convolutional neural network and long short-term memory network) to perform this task in two prototypical low-dimensional chaotic dynamical systems. Our results suggest that the accuracy of the ML predictions is highly dependent upon the nature of the distribution of the LLE values in phase space: large prediction errors occur in regions of the attractor where the LLE values are highly non-smooth. In line with classical dynamical systems studies, the neutral LLE is more difficult to predict. We show that a comparatively simple regression tree can achieve performance similar to sophisticated neural networks, and that the success of ML strategies for exploiting the temporal structure of data depends on the system dynamics.
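The costly reference computation that the ML methods are meant to replace, i.e. tangent propagation with repeated QR orthogonalisation, can be sketched on a toy map with analytically known exponents. Arnold's cat map is used here purely for illustration (its exponents are ±ln((3+√5)/2) ≈ ±0.962); it is not one of the systems used in the study:

```python
import numpy as np

# Arnold's cat map has a constant tangent linear model J, so its Lyapunov
# exponents are the logs of the eigenvalues of J and can be checked exactly.
J = np.array([[1.0, 1.0], [1.0, 2.0]])

def lyapunov_qr(n_steps=1000):
    """Propagate tangent perturbations along the trajectory and repeatedly
    re-orthogonalise them with QR, accumulating the log growth rates."""
    x = np.array([0.3, 0.7])
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n_steps):
        x = (J @ x) % 1.0                 # advance the system trajectory
        Q, R = np.linalg.qr(J @ Q)        # tangent propagation + QR step
        sums += np.log(np.abs(np.diag(R)))
    return sums / n_steps                 # time-averaged exponents
```

Replacing this loop (and its per-window variant for LLEs) with a learned map from the trajectory alone is exactly the saving the abstract targets.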

How to cite: Ayers, D., Lau, J., Amezcua, J., Carrassi, A., and Ojha, V.: Supervised machine learning to estimate instabilities in chaotic systems: computation of local Lyapunov exponents, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12765, https://doi.org/10.5194/egusphere-egu22-12765, 2022.

EGU22-13228 | Presentations | ITS2.6/AS5.1 | Highlight

Developing a data-driven ocean forecast system 

Rachel Furner, Peter Haynes, Dan Jones, Dave Munday, Brooks Paige, and Emily Shuckburgh

The recent boom in machine learning and data science has led to a number of new opportunities in the environmental sciences. In particular, process-based weather and climate models (simulators) represent the best tools we have to predict, understand and potentially mitigate the impacts of climate change and extreme weather. However, these models are incredibly complex and require huge amounts of High Performance Computing resources. Machine learning offers opportunities to greatly improve the computational efficiency of these models by developing data-driven emulators.

Here I discuss recent work to develop a data-driven model of the ocean, an integral part of the weather and climate system. Much recent progress has been made in developing data-driven forecast systems for atmospheric weather, highlighting the promise of such systems. These techniques can also be applied to the ocean; however, ocean modelling poses some fundamental differences and challenges compared with modelling the atmosphere: for example, oceanic flow is bathymetrically constrained across a wide range of spatial and temporal scales.

We train a neural network on the output from an expensive process-based simulator of an idealised channel configuration of oceanic flow. We show the model is able to learn the complex dynamics of the system well, replicating the mean flow and details within the flow over single prediction steps. We also see that, when iterating the model, predictions remain stable and continue to match the 'truth' over a short-term forecast period, here around a week.

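The iterated ("rolled out") use of a one-step emulator can be sketched with a linear toy system standing in for the channel-flow simulator, and a linear regression standing in for the neural network:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy "simulator": a damped rotation in 2D, a stand-in for the expensive
# process-based model that generates the training trajectory.
A = 0.98 * np.array([[np.cos(0.1), -np.sin(0.1)],
                     [np.sin(0.1),  np.cos(0.1)]])
rng = np.random.default_rng(5)
states = [rng.normal(size=2)]
for _ in range(500):
    states.append(A @ states[-1])
X, Y = np.array(states[:-1]), np.array(states[1:])

# One-step emulator trained on (state, next state) pairs...
emulator = Ridge(alpha=1e-8).fit(X, Y)

# ...then iterated for a short forecast, feeding each prediction back in.
x = X[0]
rollout = []
for _ in range(20):
    x = emulator.predict(x[None, :])[0]
    rollout.append(x)
```

Stability under this kind of iteration, rather than single-step accuracy alone, is the key property highlighted in the abstract.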
How to cite: Furner, R., Haynes, P., Jones, D., Munday, D., Paige, B., and Shuckburgh, E.: Developing a data-driven ocean forecast system, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13228, https://doi.org/10.5194/egusphere-egu22-13228, 2022.

EGU22-486 | Presentations | ITS2.1/PS1.2

Enhancing planetary imagery with the holistic attention network algorithm 

Denis Maxheimer, Ioannis Markonis, Jan Masner, Vojtech Curin, Jan Pavlik, and Anezina Solomonidou

The recent developments in computer vision research in the field of Single Image Super Resolution (SISR) can help improve satellite imagery data quality and, thus, find application in planetary exploration. The aim of this study is to enhance planetary surface imagery for planetary bodies where data are available but only at low resolution. Here, we have applied the holistic attention network (HAN) algorithm to a set of images of Saturn's moon Titan from the Titan Radar Mapper instrument in its Synthetic Aperture Radar (SAR) mode, which was on board the Cassini spacecraft. HAN can find correlations among hierarchical layers, the channels of each layer, and all positions of each channel, and can be interpreted as an application and intersection of previously known models. The algorithm used in our case study was trained on 5000 grayscale images from the HydroSHEDS Earth surface imagery dataset, resampled over different resolutions. Our experimental setup was to generate High Resolution (HR) imagery from eight times lower resolution (x8 scale). We followed the standard workflow for this purpose, which is to first train the network to enhance the x2 scale to HR, then the x4 scale to x2, and finally the x8 scale to x4, subsequently using the results of the previous training. The promising results open a path for further applications of the trained model to improve imagery data quality and to aid in the detection and analysis of planetary surface features.
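The staged ×2 workflow can be sketched schematically. A trivial nearest-neighbour upsampler stands in for the trained HAN at each stage; only the data flow between stages is the point here:

```python
import numpy as np

def upscale_x2(img):
    """Stand-in for one trained x2 super-resolution network (HAN in the
    study); here simply nearest-neighbour upsampling."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def super_resolve_x8(img):
    # x8 -> x4 -> x2 -> HR: each stage feeds the next, mirroring the order
    # in which the networks are trained.
    for _ in range(3):
        img = upscale_x2(img)
    return img
```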

How to cite: Maxheimer, D., Markonis, I., Masner, J., Curin, V., Pavlik, J., and Solomonidou, A.: Enhancing planetary imagery with the holistic attention network algorithm, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-486, https://doi.org/10.5194/egusphere-egu22-486, 2022.

EGU22-692 | Presentations | ITS2.1/PS1.2

Autonomous lineament detection in Galileo images of Europa 

Caroline Haslebacher and Nicolas Thomas

Lineaments are prominent features on the surface of Jupiter's moon Europa. Analysing these linear features thoroughly leads to insights into their formation mechanisms and the interactions between the subsurface ocean and the surface. The orientation and position of lineaments are also important for determining the stress field on Europa. The Europa Clipper mission is planned to launch in 2024 and will fly by Europa more than 40 times. In light of this, an autonomous lineament detection and segmentation tool would prove useful for efficiently processing the vast number of expected images and would help to identify processes affecting the ice shell. 

We have trained a convolutional neural network to detect, classify and segment lineaments in images of Europa returned by the Galileo mission. The Galileo images that make up the training set are segmented manually, following a dedicated guideline. For better performance, we make use of synthetically generated data to pre-train the network. The current status of the work will be described.

How to cite: Haslebacher, C. and Thomas, N.: Autonomous lineament detection in Galileo images of Europa, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-692, https://doi.org/10.5194/egusphere-egu22-692, 2022.

EGU22-1014 | Presentations | ITS2.1/PS1.2

Automatic detection of the electron density from the WHISPER instrument onboard CLUSTER II 

Emmanuel De Leon, Nicolas Gilet, Xavier Vallières, Luca Bucciantini, Pierre Henri, and Jean-Louis Rauch

The Waves of HIgh frequency and Sounder for Probing Electron density by Relaxation (WHISPER) instrument is part of the Wave Experiment Consortium (WEC) of the CLUSTER II mission. The instrument consists of a receiver, a transmitter, and a wave spectrum analyzer. It delivers active (when in sounding mode) and natural electric field spectra. The characteristic signature of waves indicates the nature of the ambient plasma regime and, combined with the spacecraft position, reveals the different magnetosphere boundaries and regions. The thermal electron density can be deduced from the characteristics of natural waves in natural mode and from the resonances triggered in sounding mode, giving access to a key parameter of scientific interest and a major driver for the calibration of particle instruments.

Until recently, the electron density derivation required a manual time/frequency domain initialization of the search algorithms, based upon visual inspection of WHISPER active and natural spectrograms and other datasets from different instruments onboard CLUSTER.

To automate this process, knowledge of the region (plasma regime) is highly desirable. A Multi-Layer Perceptron model has been implemented for this purpose. For each detected region, a GRU recurrent network model combined with an ad hoc algorithm is then used to determine the electron density from WHISPER active spectra. These models have been trained using electron densities previously derived from various semi-automatic algorithms and manually validated, resulting in an accuracy of up to 98% in some plasma regions. A production pipeline based on these models has been implemented to routinely derive the electron density, reducing human intervention by up to a factor of 10. Work is currently ongoing to create models to process natural measurements, where the data volume is much higher and the validation process more complex. These models for automated electron density determination will also be useful for future space missions.
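
The GRU recurrence at the heart of the second stage can be illustrated with a minimal numpy sketch. The weights and the "spectrum slices" below are random placeholders, not the trained operational model; the block only shows the gated update that lets a GRU carry context along a scan.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU recurrence step: x is the current input slice,
    h the hidden state carried along the scan axis."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 8, 4
# Random weight matrices Wz, Uz, Wr, Ur, Wh, Uh (toy, untrained).
params = [rng.standard_normal(s) * 0.1
          for s in [(d_h, d_in), (d_h, d_h)] * 3]
h = np.zeros(d_h)
for x in rng.standard_normal((20, d_in)):  # scan 20 input slices
    h = gru_step(x, h, *params)
assert h.shape == (d_h,)
```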

How to cite: De Leon, E., Gilet, N., Vallières, X., Bucciantini, L., Henri, P., and Rauch, J.-L.: Automatic detection of the electron density from the WHISPER instrument onboard CLUSTER II, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1014, https://doi.org/10.5194/egusphere-egu22-1014, 2022.

EGU22-2765 | Presentations | ITS2.1/PS1.2

Extrapolation of CRISM based spectral feature maps using CaSSIS four-band images with machine learning techniques 

Michael Fernandes, Nicolas Thomas, Benedikt Elser, Angelo Pio Rossi, Alexander Pletl, and Gabriele Cremonese

Spectroscopy provides important information on the surface composition of Mars. Spectral data can support studies such as the evaluation of potential (manned) landing sites as well as the determination of past surface processes. The CRISM instrument on NASA’s Mars Reconnaissance Orbiter is a high spectral resolution visible-infrared mapping spectrometer currently in orbit around Mars. It records 2D spatially resolved spectra over a wavelength range of 362 nm to 3920 nm. At present, the data collected cover less than 2% of the planet. Lifetime issues with the cryo-coolers prevent further data acquisition in the infrared band. In order to extend areal coverage for spectroscopic analysis in regions of major importance to the history of liquid water on Mars (e.g. Valles Marineris, Noachis Terra), we investigate whether data from other instruments can be fused with CRISM data to extrapolate spectral features to areas without spectral imaging. The present work uses data from the CaSSIS instrument, a high spatial resolution colour and stereo imager onboard the European Space Agency’s ExoMars Trace Gas Orbiter (TGO). CaSSIS returns images at 4.5 m/px from the nominal 400 km altitude orbit in four colours. Its filters were selected to provide mineral diagnostics in the visible wavelength range (400–1100 nm). It has so far imaged around 2% of the planet, with an estimated overlap of ≲0.01% with CRISM data. This study introduces a two-step pixel-based reconstruction approach using CaSSIS four-band images. In the first step, advanced unsupervised techniques are applied to CRISM hyperspectral datacubes to reduce dimensionality and establish clusters of spectral features. Given that these clusters contain reasonable information about the surface composition, in a second step it is feasible to map CaSSIS four-band images to the spectral clusters by training a machine learning classifier (for the cluster labels) using only CaSSIS datasets.
In this way the system can extrapolate spectral features to areas unmapped by CRISM. To assess the performance of the proposed methodology, we analyzed actual and artificially generated CaSSIS images and benchmarked the results against traditional correlation-based methods. Qualitative and quantitative analyses indicate that this novel procedure can predict spectral features in areas without spectral imaging to a quantitatively measurable extent, especially in highly feature-rich landscapes.
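
The two-step approach (unsupervised spectral clustering, then a supervised classifier on the four-band values) can be sketched with scikit-learn on synthetic data. All array sizes, the choice of KMeans and a random forest, and the crude band selection below are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Step 1: cluster synthetic "hyperspectral" pixels (50 bands) into a
# handful of spectral classes (stand-in for the CRISM datacubes).
n_px, n_bands, n_clusters = 500, 50, 4
cubes = rng.random((n_px, n_bands))
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(cubes)

# Step 2: train a classifier that predicts those cluster labels from
# only four broadband values (stand-in for the CaSSIS colours).
four_band = cubes[:, ::13][:, :4]   # crude 4-band summary of each spectrum
clf = RandomForestClassifier(random_state=0).fit(four_band, labels)

# The classifier can now extrapolate spectral classes to areas
# where only the four-band imagery exists.
pred = clf.predict(four_band)
assert pred.shape == (n_px,)
```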

How to cite: Fernandes, M., Thomas, N., Elser, B., Rossi, A. P., Pletl, A., and Cremonese, G.: Extrapolation of CRISM based spectral feature maps using CaSSIS four-band images with machine learning techniques, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2765, https://doi.org/10.5194/egusphere-egu22-2765, 2022.

EGU22-2994 | Presentations | ITS2.1/PS1.2

Interpretable Solar Flare Prediction with Deep Learning 

Robert Jarolim, Astrid Veronig, Tatiana Podladchikova, Julia Thalmann, Dominik Narnhofer, Markus Hofinger, and Thomas Pock

Solar flares and coronal mass ejections (CMEs) are the main drivers for severe space weather disturbances on Earth and other planets. While the geo-effects of CMEs give us a lead time of about 1 to 4 days, the effects of flares and flare-accelerated solar energetic particles (SEPs) are very immediate, 8 minutes for the enhanced radiation and as short as about 20 minutes for the highest energy SEPs arriving at Earth. Thus, predictions of solar flare occurrence at least several hours ahead are of high importance for the mitigation of severe space weather effects.

Observations and simulations of solar flares suggest that the structure and evolution of the active region’s magnetic field is a key component for energetic eruptions. The recent advances in deep learning provide tools to directly learn complex relations from multi-dimensional data. Here, we present a novel deep learning method for short-term solar flare prediction. The algorithm is based on the HMI photospheric line-of-sight magnetic field and its temporal evolution, together with the coronal evolution as observed by multi-wavelength EUV filtergrams from the AIA instrument onboard the Solar Dynamics Observatory. We train a neural network to independently identify features in the imaging data based on the dynamic evolution of the coronal structure and the photospheric magnetic field evolution, which may hint at flare occurrence in the near future.

We show that our method can reliably predict flares six hours ahead, with 73% correct flaring predictions (89% when considering only M- and X-class flares) and 83% correct quiet active region predictions.

In order to overcome the “black box problem” of machine-learning algorithms, and thus to allow for physical interpretation of the network findings, we employ a spatio-temporal attention mechanism. This allows us to extract the emphasized regions, which reveal the neural network interpretation of the flare onset conditions. Our comparison shows that predicted precursors are associated with the position of flare occurrence, respond to dynamic changes, and align with characteristics within the active region.
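
As a rough illustration of how an attention mechanism exposes emphasized regions, the numpy toy below computes softmax attention weights over image patches. Features and query are random placeholders; this is a simplified analogue of the spatio-temporal attention described above, not the actual network.

```python
import numpy as np

rng = np.random.default_rng(2)
features = rng.random((16, 8))   # 16 image patches x 8 features each
query = rng.random(8)            # learned query vector (random stand-in)

scores = features @ query                 # relevance score per patch
weights = np.exp(scores - scores.max())
weights /= weights.sum()                  # softmax: attention weights sum to 1
context = weights @ features              # attention-weighted summary

# Inspecting `weights` shows which patches the model "attends" to,
# which is the basis for the interpretability argument above.
assert np.isclose(weights.sum(), 1.0) and context.shape == (8,)
```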

How to cite: Jarolim, R., Veronig, A., Podladchikova, T., Thalmann, J., Narnhofer, D., Hofinger, M., and Pock, T.: Interpretable Solar Flare Prediction with Deep Learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2994, https://doi.org/10.5194/egusphere-egu22-2994, 2022.

EGU22-5721 | Presentations | ITS2.1/PS1.2

Magnetopause and bow shock models with machine learning 

Ambre Ghisalberti, Nicolas Aunai, and Bayane Michotte de Welle

The magnetopause (MP) and the bow shock (BS) are the two boundaries bounding the magnetosheath, the region between the magnetosphere and the solar wind. Their position and shape depend on the upstream solar wind and interplanetary magnetic field conditions.

Predicting their shape and position is the starting point of many subsequent studies of the processes controlling the coupling between the Earth’s magnetosphere and its interplanetary environment. We now have at our disposal a large amount of data from a multitude of spacecraft missions, allowing for good spatial coverage, as well as algorithms based on statistical learning to automatically detect the two boundaries. From the data of 9 satellites over 20 years, we identified around 19000 crossings of the BS and 36000 crossings of the MP. They were used, together with their associated upstream conditions, to train a regression model to predict the shape and position of the boundaries.

Preliminary results indicate that the obtained models outperform analytical models without making simplifying assumptions about the geometry or the dependence on control parameters.

How to cite: Ghisalberti, A., Aunai, N., and Michotte de Welle, B.: Magnetopause and bow shock models with machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5721, https://doi.org/10.5194/egusphere-egu22-5721, 2022.

EGU22-5739 | Presentations | ITS2.1/PS1.2

Deep learning for surrogate modeling of two-dimensional mantle convection 

Siddhant Agarwal, Nicola Tosi, Pan Kessel, Doris Breuer, and Grégoire Montavon

Mantle convection plays a fundamental role in the long-term thermal evolution of terrestrial planets like Earth, Mars, Mercury and Venus. The buoyancy-driven creeping flow of silicate rocks in the mantle is modeled as a highly viscous fluid over geological time scales and quantified using partial differential equations (PDEs) for conservation of mass, momentum and energy. Yet, key parameters and initial conditions to these PDEs are poorly constrained and often require a large sampling of the parameter space to find constraints from observational data. Since it is not computationally feasible to solve hundreds of thousands of forward models in 2D or 3D, some alternatives have been proposed. 

The traditional alternative to high-fidelity simulations has been to use 1D models based on scaling laws. While computationally efficient, these are limited in the amount of physics they can model (e.g., depth-dependent material properties) and predict only mean quantities such as the mean mantle temperature. Hence, there has been a growing interest in machine learning techniques for building more advanced surrogate models. For example, Agarwal et al. (2020) used feedforward neural networks (FNNs) to reliably predict the evolution of the entire 1D laterally averaged temperature profile in time from five parameters: reference viscosity, enrichment factor of the crust in heat-producing elements, initial mantle temperature, and activation energy and activation volume of diffusion creep.

We extend that study to predict the full 2D temperature field, which contains more information in the form of convection structures such as hot plumes and cold downwellings. This is achieved by training deep learning algorithms on a data set of 10,525 2D simulations of the thermal evolution of the mantle of a Mars-like planet. First, we use convolutional autoencoders to compress the size of each temperature field by a factor of 142. Second, we compare two algorithms for predicting the compressed (latent) temperature fields: FNNs and long short-term memory networks (LSTMs). On the one hand, the FNN predictions are slightly more accurate with respect to unseen simulations (99.30% vs. 99.22% for the LSTM). On the other hand, proper orthogonal decomposition (POD) of the LSTM and FNN predictions shows that, despite a lower mean relative accuracy, LSTMs capture the flow dynamics better than FNNs. The POD coefficients from FNN predictions sum up to 96.51% relative to the coefficients of the original simulations, while for LSTMs this metric increases to 97.66%.
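
The POD comparison rests on projecting snapshots onto singular vectors of a centred snapshot matrix. A minimal numpy sketch, with random snapshots standing in for the temperature fields and an arbitrary 95% energy threshold chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_snap, nx, ny = 30, 16, 16
snapshots = rng.random((n_snap, nx, ny))   # toy 2D "temperature" fields

X = snapshots.reshape(n_snap, -1)
X -= X.mean(axis=0)                        # centre the snapshot matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)    # captured "energy" vs. mode count
k = int(np.searchsorted(energy, 0.95)) + 1 # modes needed for 95% of variance
reduced = X @ Vt[:k].T                     # POD (latent) coefficients
assert reduced.shape == (n_snap, k)
```

Comparing such coefficients between predicted and original fields is one way to quantify how well a surrogate captures the flow structures rather than just pointwise values.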

We conclude the talk by stating some strengths and weaknesses of this approach, as well as highlighting some ongoing research in the broader field of fluid dynamics that could help increase the accuracy and efficiency of such parameterized surrogate models.

How to cite: Agarwal, S., Tosi, N., Kessel, P., Breuer, D., and Montavon, G.: Deep learning for surrogate modeling of two-dimensional mantle convection, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5739, https://doi.org/10.5194/egusphere-egu22-5739, 2022.

EGU22-6371 | Presentations | ITS2.1/PS1.2

STIX solar flare image reconstruction and classification using machine learning 

Hualin Xiao, Säm Krucker, Daniel Ryan, Andrea Battaglia, Erica Lastufka, Etesi László, Ewan Dickson, and Wen Wang

The Spectrometer Telescope for Imaging X-rays (STIX) is an instrument onboard Solar Orbiter. It measures X-rays emitted during solar flares in the energy range from 4 to 150 keV and takes X-ray images using an indirect imaging technique based on the Moiré effect. The STIX instrument consists of 32 pairs of tungsten grids and 32 pixelated CdTe detector units. Flare images can be reconstructed on the ground using algorithms such as back-projection, forward-fit, and maximum entropy after the full pixel data are downloaded. Here we report a new image reconstruction and classification model based on machine learning. Results will be discussed and compared with those from the traditional algorithms.

How to cite: Xiao, H., Krucker, S., Ryan, D., Battaglia, A., Lastufka, E., László, E., Dickson, E., and Wang, W.: STIX solar flare image reconstruction and classification using machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6371, https://doi.org/10.5194/egusphere-egu22-6371, 2022.

EGU22-8940 | Presentations | ITS2.1/PS1.2

Mars events polyphonic detection, segmentation and classification with a hybrid recurrent scattering neural network using InSight mission data 

Salma Barkaoui, Angel Bueno Rodriguez, Philippe Lognonné, Maarten De Hoop, Grégory Sainton, Mathieu Plasman, and Taichi kawamura

Since their deployment on the Martian surface, the seismometer SEIS (Seismic Experiment for Interior Structure) and the APSS (Auxiliary Payload Sensors Suite) of the InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) mission have been recording the daily Martian ground acceleration and pressure, respectively. These data are essential for investigating the geophysical and atmospheric features of the red planet. So far, the InSight team has been able to detect multiple Martian events. We distinguish two types: artificial events, such as the lander modes or the micro-tilts known as glitches, and natural events, such as the pressure drops, which are important for estimating the properties of the Martian subsurface, and the seismic events used to study the interior structure of Mars. Despite the data complexity, the InSight team was able to catalog these events (Clinton et al. 2020 for the seismic event catalog, Banfield et al. 2018, 2020 for the pressure drop catalog, and Scholz et al. 2020 for the glitch catalog). However, despite all this effort, we still face multiple challenges. Seismic event detection is limited by the sensitivity of SEIS, which results in a high noise level that may contaminate the seismic events. Thus, we can miss some of them, especially in noisy periods. Moreover, their detection is very challenging and requires multiple preprocessing tasks, which is time-consuming. For the pressure drops, the detection method used in Banfield et al. 2020 is limited by a threshold of 0.3 Pa, so smaller pressure drops are not included. In addition, due to a lack of energy, the pressure sensor was off for several days, and many pressure drops were missed as a result. Being able to detect them directly in the SEIS data, which are, in contrast, provided continuously, is therefore very important.

In this regard, the aim of this study is to overcome these challenges, improve the detection of Martian events, and automatically provide an updated catalog. For this, we draw on one of the main techniques used today for fully automatic data processing and analysis: machine learning, and in particular deep learning. The architecture used is the “Hybrid Recurrent Scattering Neural Network” (Bueno et al. 2021), adapted for Mars.

How to cite: Barkaoui, S., Bueno Rodriguez, A., Lognonné, P., De Hoop, M., Sainton, G., Plasman, M., and kawamura, T.: Mars events polyphonic detection, segmentation and classification with a hybrid recurrent scattering neural network using InSight mission data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8940, https://doi.org/10.5194/egusphere-egu22-8940, 2022.

EGU22-9077 | Presentations | ITS2.1/PS1.2

Automatic Detection of Interplanetary Coronal Mass Ejections 

Hannah Ruedisser, Andreas Windisch, Ute V. Amerstorfer, Tanja Amerstorfer, Christian Möstl, Martin A. Reiss, and Rachel L. Bailey

Interplanetary coronal mass ejections (ICMEs) are one of the main drivers of space weather disturbances. In the past, different machine learning approaches have been used to automatically detect events in time series of solar wind in situ data. However, classification, early detection and ultimately forecasting still remain challenging when facing the large amount of data from different instruments. We propose a pipeline using a network similar to ResUNet++ (Jha et al., 2019) for the automatic detection of ICMEs. Comparing it to an existing method, we find that while achieving similar results, our model outperforms the baseline regarding GPU usage, training time and robustness to missing features, making it more usable for other datasets.
The method has been tested on in situ data from WIND. Additionally, it produced reasonable results on STEREO A and STEREO B datasets with fewer input parameters. The relatively fast training allows straightforward tuning of hyperparameters, so the method could easily be used to detect other structures and phenomena in solar wind data, such as corotating interaction regions.

How to cite: Ruedisser, H., Windisch, A., Amerstorfer, U. V., Amerstorfer, T., Möstl, C., Reiss, M. A., and Bailey, R. L.: Automatic Detection of Interplanetary Coronal Mass Ejections, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9077, https://doi.org/10.5194/egusphere-egu22-9077, 2022.

EGU22-9621 | Presentations | ITS2.1/PS1.2

Machine Learning Techniques for Automated ULF Wave Recognition in Swarm Time Series 

Georgios Balasis, Alexandra Antonopoulou, Constantinos Papadimitriou, Adamantia Zoe Boutsi, Omiros Giannakis, and Ioannis A. Daglis

Machine learning (ML) techniques have been successfully introduced in the fields of Space Physics and Space Weather, yielding highly promising results in modeling and predicting many disparate aspects of the geospace. Magnetospheric ultra-low frequency (ULF) waves play a key role in the dynamics of the near-Earth electromagnetic environment and, therefore, their importance in Space Weather studies is indisputable. Magnetic field measurements from recent multi-satellite missions are currently advancing our knowledge of the physics of ULF waves. In particular, the Swarm satellites have contributed to the expansion of data availability in the topside ionosphere, stimulating much recent progress in this area. Coupled with recent successful developments in artificial intelligence, we are now able to use more robust approaches for automated ULF wave identification and classification. Here, we present results employing various neural network (NN) methods (e.g. Fuzzy Artificial Neural Networks, Convolutional Neural Networks) to detect ULF waves in the time series of low-Earth orbit (LEO) satellites. The outputs of these methods are compared against other ML classifiers (e.g. k-Nearest Neighbors (kNN), Support Vector Machines (SVM)), showing a clear dominance of the NNs in successfully classifying wave events.
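
As a minimal illustration of the kind of baseline classifier entering such a comparison, the numpy snippet below implements k-nearest neighbours on synthetic two-class "wave"/"no-wave" feature vectors. The data, feature dimension and k are toy assumptions; the actual study works on Swarm time series.

```python
import numpy as np

rng = np.random.default_rng(11)

def knn_predict(X_train, y_train, X_test, k=5):
    """Classify each test point by majority vote of its k nearest
    training points (squared Euclidean distance)."""
    d2 = ((X_test[:, None] - X_train[None]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]
    votes = y_train[idx]
    return (votes.mean(axis=1) > 0.5).astype(int)

# Two synthetic classes of 3-D "spectral" features.
X = np.vstack([rng.normal(0, 0.3, (80, 3)), rng.normal(1, 0.3, (80, 3))])
y = np.array([0] * 80 + [1] * 80)
pred = knn_predict(X, y, X)
accuracy = (pred == y).mean()
assert accuracy > 0.9
```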

How to cite: Balasis, G., Antonopoulou, A., Papadimitriou, C., Boutsi, A. Z., Giannakis, O., and Daglis, I. A.: Machine Learning Techniques for Automated ULF Wave Recognition in Swarm Time Series, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9621, https://doi.org/10.5194/egusphere-egu22-9621, 2022.

EGU22-10105 | Presentations | ITS2.1/PS1.2

Forecasting solar wind conditions at Mars using transfer learning

S. Durward

The solar wind and its variability are well understood at Earth. However, at distances larger than 1 AU the picture is less clear, mostly due to the lack of in-situ measurements. In this study we use transfer learning principles to infer solar wind conditions at Mars in periods when no measurements are available, with the aim of better illuminating the interaction between the partially magnetised Martian plasma environment and the upstream solar wind. Initially, a convolutional neural network (CNN) model for forecasting measurements of the interplanetary magnetic field, solar wind velocity, density and dynamic pressure is trained on terrestrial solar wind data. Afterwards, knowledge from this model is incorporated into a secondary CNN model, which is used for predicting solar wind conditions upstream of Mars up to 5 hours into the future. We present the results of this study as well as opportunities to extend this method to other planets.
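
The warm-start idea behind transfer learning can be sketched with a plain linear model in numpy: fit on abundant "Earth" data, then fine-tune from those weights on scarce "Mars" data. All data and coefficients below are synthetic; the study itself uses CNNs.

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_linear(X, y, w0=None, steps=200, lr=0.1):
    """Gradient-descent least squares; w0 allows a warm start (transfer)."""
    w = np.zeros(X.shape[1]) if w0 is None else w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

# "Earth" solar-wind history: plenty of labelled samples.
w_true = np.array([0.5, -1.0, 2.0])
X_earth = rng.standard_normal((1000, 3))
y_earth = X_earth @ w_true + 0.05 * rng.standard_normal(1000)
w_earth = fit_linear(X_earth, y_earth)

# "Mars": only a few samples; warm-start from the Earth-trained weights
# and fine-tune with a handful of steps (the transfer step).
X_mars = rng.standard_normal((20, 3))
y_mars = X_mars @ w_true + 0.05 * rng.standard_normal(20)
w_mars = fit_linear(X_mars, y_mars, w0=w_earth, steps=20)
assert np.allclose(w_mars, w_true, atol=0.2)
```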

How to cite: Durward, S.: Forecasting solar wind conditions at Mars using transfer learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10105, https://doi.org/10.5194/egusphere-egu22-10105, 2022.

EGU22-11501 | Presentations | ITS2.1/PS1.2

Automatic detection of solar magnetic tornadoes based on computer vision methods. 

Dmitrii Vorobev, Mark Blumenau, Mikhail Fridman, Olga Khabarova, and Vladimir Obridko

We propose a new method for the automatic detection of solar magnetic tornadoes based on computer vision methods. Magnetic tornadoes are magneto-plasma structures with a swirling magnetic field in the solar corona, and there is also evidence for the rotation of plasma in them. A theoretical description and numerical modeling of these objects are very difficult due to the three-dimensionality of the structures and the peculiarities of their spatial and temporal dynamics [Wedemeyer-Böhm et al., 2012, Nature]. Typical sizes of magnetic tornadoes vary from 10² km up to 10⁶ km, and their lifetime ranges from several minutes to many hours. So far, only a few works have been devoted to their study, and there are no accepted algorithms for detecting solar magnetic tornadoes by machine methods. The insufficient number of identified structures is one of many problems that hinder the study of the physics of magnetic tornadoes and the processes associated with them. In particular, the filamentous rotating structures are well detectable only at the limb, while one can only make suppositions about their presence on the solar disk.
Our method is based on analyzing SDO/AIA images at wavelengths of 171 Å, 193 Å, 211 Å and 304 Å, to which several different algorithms are applied, namely convolution with filters, a convolutional neural network, and gradient boosting. The new technique is a combination of several approaches (transfer learning & stacking) that are widely used in various fields of data analysis. Such an approach allows detecting the structures in a short time with sufficient accuracy. As test objects, we used magnetic tornadoes previously described in the literature [e.g., Wedemeyer et al. 2013, ApJ; Mghebrishvili et al. 2015, ApJ]. Our method made it possible to detect those structures, as well as to reveal previously unknown magnetic tornadoes.
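
The stacking step can be sketched as fitting a meta-weighting over the scores of several base detectors. The numpy toy below uses synthetic detector scores of varying quality (not the authors' filters, CNN or gradient boosting models) and a least-squares meta-learner chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 400
truth = rng.integers(0, 2, n)             # tornado present / absent

# Three noisy base detectors with different error levels
# (stand-ins for e.g. filter convolution, CNN, gradient boosting).
base = np.stack([truth + rng.normal(0, s, n) for s in (0.4, 0.6, 0.8)], axis=1)

# Least-squares meta-learner over the stacked base scores:
A = np.column_stack([np.ones(n), base])
w, *_ = np.linalg.lstsq(A, truth, rcond=None)
stacked = (A @ w > 0.5).astype(int)
accuracy = (stacked == truth).mean()
assert accuracy > 0.85   # the stack beats the weakest base detectors
```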

How to cite: Vorobev, D., Blumenau, M., Fridman, M., Khabarova, O., and Obridko, V.: Automatic detection of solar magnetic tornadoes based on computer vision methods., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11501, https://doi.org/10.5194/egusphere-egu22-11501, 2022.

EGU22-12480 | Presentations | ITS2.1/PS1.2

A versatile exploration method for simulated data based on Self Organizing Maps 

Maria Elena Innocenti, Sophia Köhne, Simon Hornisch, Rainer Grauer, Jorge Amaya, Jimmy Raeder, Banafsheh Ferdousi, James "Andy" Edmond, and Giovanni Lapenta

The large amount of data produced by measurements and simulations of space plasmas has made them fertile ground for the application of classification methods that can support the scientist in preliminary data analysis. Among the different classification methods available, Self-Organizing Maps (SOMs) [Kohonen, 1982] offer the distinct advantage of producing an ordered, lower-dimensional representation of the input data that preserves their topographical relations. The 2D map obtained after training can then be explored to gather knowledge about the data it represents. The distance between nodes reflects the distance between the input data: one can then further cluster the map nodes to identify large-scale regions in the data where plasma properties are expected to be similar.

In this work, we train SOMs using data from different simulations of different aspects of the heliospheric environment: a global magnetospheric simulation done with the OpenGGCM-CTIM-RCM code, a Particle In Cell simulation of plasmoid instability done with the semi-implicit code ECSIM, a fully kinetic simulation of single X point reconnection done with the Vlasov code implemented in MuPhy2.

We examine the SOM feature maps, unified distance matrix and SOM node weights to unlock information on the input data. We then classify the nodes of the different SOMs into a smaller, automatically selected number of clusters and obtain, in all three cases, clusters that map well to our a priori knowledge of the three systems. Results for the magnetospheric simulations are described in Innocenti et al., 2021.

This classification strategy then emerges as a useful, relatively cheap and versatile technique for the analysis of simulation, and possibly observational, plasma physics data.
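
A minimal numpy SOM illustrates the training loop behind such maps: each sample pulls its best-matching unit and that unit's grid neighbours toward it, with shrinking learning rate and neighbourhood. The tiny 5x5 map and two-regime synthetic data below are illustrative; real studies use larger maps and dedicated libraries.

```python
import numpy as np

rng = np.random.default_rng(3)

def train_som(data, rows=5, cols=5, epochs=20, lr0=0.5, sigma0=2.0):
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    w = rng.random((rows * cols, data.shape[1]))
    t_max = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in data:
            lr = lr0 * (1 - t / t_max)               # decaying learning rate
            sigma = sigma0 * (1 - t / t_max) + 0.5   # decaying neighbourhood
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)   # grid distance to BMU
            h = np.exp(-d2 / (2 * sigma ** 2))           # neighbourhood kernel
            w += lr * h[:, None] * (x - w)               # pull nodes toward x
            t += 1
    return w

# Two well-separated synthetic "plasma regimes" in a 4-D feature space.
data = np.vstack([rng.normal(0, 0.1, (100, 4)), rng.normal(1, 0.1, (100, 4))])
weights = train_som(data)
bmus = np.argmin(((weights[None] - data[:, None]) ** 2).sum(-1), axis=1)
assert weights.shape == (25, 4)
```

After training, clustering the node weights (rather than the raw data) is what yields the small number of large-scale regions discussed above.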

Innocenti, M. E., Amaya, J., Raeder, J., Dupuis, R., Ferdousi, B., & Lapenta, G. (2021). Unsupervised classification of simulated magnetospheric regions. Annales Geophysicae Discussions, 1-28. https://angeo.copernicus.org/articles/39/861/2021/angeo-39-861-2021.pdf

How to cite: Innocenti, M. E., Köhne, S., Hornisch, S., Grauer, R., Amaya, J., Raeder, J., Ferdousi, B., Edmond, J. "., and Lapenta, G.: A versatile exploration method for simulated data based on Self Organizing Maps, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12480, https://doi.org/10.5194/egusphere-egu22-12480, 2022.

EGU22-12830 | Presentations | ITS2.1/PS1.2

Re-implementing and Extending the NURD Algorithm to the Full Duration of the Van Allen Probes Mission 

Matyas Szabo-Roberts, Karolina Kume, Artem Smirnov, Irina Zhelavskaya, and Yuri Shprits

Generating reliable databases of electron density measurements over a wide range of geomagnetic conditions is essential for improving empirical models of electron density. The Neural-network-based Upper hybrid Resonance Determination (NURD) algorithm has been developed for the automated extraction of electron density from Van Allen Probes electric field measurements and has been shown to be in good agreement with existing semi-automated methods and empirical models. The extracted electron density data have since been used to develop the PINE (Plasma density in the Inner magnetosphere Neural network-based Empirical) model, an empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices.

In this study we re-implement the NURD algorithm in both Python and Matlab, and compare the performance of these implementations to each other and to previous NURD results. We take advantage of a labeled training data set, now available for the full duration of the Van Allen Probes mission, to train the network and generate an electron density data set for a significantly longer time period. We perform detailed comparisons between this output, electron densities produced from Van Allen Probes electric field measurements using the AURA semi-automated algorithm, and electron densities obtained from existing empirical models. We also present preliminary results from the PINE plasmasphere model trained on this extended NURD electron density data set.

How to cite: Szabo-Roberts, M., Kume, K., Smirnov, A., Zhelavskaya, I., and Shprits, Y.: Re-implementing and Extending the NURD Algorithm to the Full Duration of the Van Allen Probes Mission, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12830, https://doi.org/10.5194/egusphere-egu22-12830, 2022.

The ITU/WMO/UNEP Focus Group on AI for Natural Disaster Management (FG-AI4NDM) explores the potential of AI to support the monitoring and detection, forecasting, and communication of natural disasters. Building on the presentation at EGU2021, we will show how detailed analysis of real-life use cases by an interdisciplinary, multistakeholder, and international community of experts is leading to the development of three technical reports (dedicated to best practices in data collection and handling, AI-based algorithms, and AI-based communications technologies, respectively), a roadmap of ongoing pre-standardization and standardization activities in this domain, a glossary of relevant terms and definitions, and educational materials to support capacity building. It is hoped that these deliverables will form the foundation of internationally recognized standards.

How to cite: Kuglitsch, M.: Nature can be disruptive, so can technology: ITU/WMO/UNEP Focus Group on AI for Natural Disaster Management, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8, https://doi.org/10.5194/egusphere-egu22-8, 2022.

EGU22-79 | Presentations | ITS2.5/NH10.8

Assessing the impact of sea-level rise on future compound flooding hazards in the Kapuas River delta 

Joko Sampurno, Valentin Vallaeys, Randy Ardianto, and Emmanuel Hanert

Compound flooding hazards in estuarine deltas are increasing due to mean sea-level rise (SLR) driven by climate change. Decision-makers need future hazard analyses to mitigate such events and design adaptation strategies. However, to date, no future hazard analysis has been made for the Kapuas River delta, a low-lying area on the west coast of the island of Borneo, Indonesia. Therefore, this study aims to assess future compound flooding hazards under SLR over the delta, particularly in Pontianak (the densest urban area in the region). Here we consider three SLR scenarios due to climate change: a low emission scenario (RCP2.6), a medium emission scenario (RCP4.5), and a high emission scenario (RCP8.5). We implement a machine-learning technique, the multiple linear regression (MLR) algorithm, to model the river water level dynamics within the city. We then predict future extreme river water levels due to interactions of river discharges, rainfall, winds, and tides. Furthermore, we create flood maps showing the likelihood of areas being flooded for a 100-year return period (1% annual exceedance probability) under the expected sea-level rise. We find that the extreme 1% return water level for the study area in 2100 increases from about 2.80 m (the current flood frequency state) to 3.03 m under RCP2.6, 3.13 m under RCP4.5, and 3.38 m under RCP8.5.
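
Multiple linear regression of this kind reduces to an ordinary least-squares fit of water level against several predictors. A numpy sketch with synthetic stand-ins for discharge, rainfall, wind and tide (the coefficients below are invented for illustration, not fitted to Pontianak data):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
X = rng.random((n, 4))                       # discharge, rain, wind, tide
coef_true = np.array([1.2, 0.4, 0.1, 0.9])   # invented "true" sensitivities
level = 1.5 + X @ coef_true + 0.02 * rng.standard_normal(n)

A = np.column_stack([np.ones(n), X])         # add an intercept column
beta, *_ = np.linalg.lstsq(A, level, rcond=None)
pred = A @ beta                              # modelled water levels
assert np.allclose(beta[1:], coef_true, atol=0.05)
```

Once fitted, the same linear form can be driven with scenario inputs (e.g. SLR-shifted tide levels) to produce the extreme-water-level estimates discussed above.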

How to cite: Sampurno, J., Vallaeys, V., Ardianto, R., and Hanert, E.: Assessing the impact of sea-level rise on future compound flooding hazards in the Kapuas River delta, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-79, https://doi.org/10.5194/egusphere-egu22-79, 2022.

EGU22-266 | Presentations | ITS2.5/NH10.8

DisDSS 2.0: A Multi-Hazard Web-based Disaster Management System to Identify Disaster-Relevancy of a Social Media Message for Decision-Making Using Deep Learning Techniques

According to UNDRR (2021), 389 disasters were reported in 2020. These disasters claimed the lives of 15,080 people, affected 98.4 million people globally, and caused US$171.3 billion in economic damage. International agreements such as the Sendai Framework for Disaster Risk Reduction encourage the use of social media to strengthen disaster risk communication. With the advent of new technologies, social media has emerged as an important source of information in disaster management, and social media activity increases during disasters. Social media is the fourth most used platform for accessing emergency information. People seek to contact family and friends and search for food, water, transportation, and shelter. During cataclysmic events, the critical information posted on social media is buried in irrelevant information. To support emergency response, robust methodologies are required for extracting relevant information. This study explores novel deep learning methods for automatically identifying the relevance of disaster-related social media messages. The contributions of this study are three-fold. Firstly, we present a hybrid deep learning-based framework to improve the classification of disaster-related social media messages. The data are gathered from the Twitter platform using the Search Application Programming Interface. Messages that provide situational information or contain information regarding the need for or availability of vital resources such as food, water, and electricity are categorized as relevant; the rest are categorized as irrelevant. To demonstrate the applicability and effectiveness of the proposed approach, it is applied to a thunderstorm dataset and a Cyclone Fani dataset; both disasters occurred in India in 2019.
Secondly, the performance of the proposed approach is compared with baseline methods: a convolutional neural network, a long short-term memory network, and a bidirectional long short-term memory network. The proposed approach outperforms the baselines. Performance is evaluated using multiple metrics: accuracy, precision, recall, F-score, area under the receiver operating characteristic curve, and area under the precision-recall curve. Correct and incorrect classifications are reported for both datasets. Thirdly, to incorporate the evaluated models into a working application, we extend an existing application, DisDSS, which has been granted a copyright award by the Government of India. We call the newly extended system DisDSS 2.0; it integrates our framework to address the disaster-relevancy identification problem. The output of this research helps disaster managers make effective decisions on time and bridges the gap between decision-makers and citizens during disasters through the lens of deep learning.
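The evaluation metrics named above (accuracy, precision, recall, F-score) reduce to counts from the confusion matrix; a minimal sketch with toy relevant/irrelevant labels, not the study's Twitter data:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for a relevant/irrelevant split."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # relevant, predicted relevant
    fp = np.sum((y_true == 0) & (y_pred == 1))   # irrelevant, predicted relevant
    fn = np.sum((y_true == 1) & (y_pred == 0))   # relevant, missed
    accuracy = np.mean(y_true == y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy labels: 1 = relevant message, 0 = irrelevant.
acc, p, r, f1 = binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 1, 0, 0, 1])
print(acc, p, r, f1)
```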

How to cite: Singla, A., Agrawal, R., and Garg, A.: DisDSS 2.0: A Multi-Hazard Web-based Disaster Management System to Identify Disaster-Relevancy of a Social Media Message for Decision-Making Using Deep Learning Techniques, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-266, https://doi.org/10.5194/egusphere-egu22-266, 2022.

EGU22-781 | Presentations | ITS2.5/NH10.8

Smart Flood Resilience: Harnessing Community-Scale Big Data for Predictive Flood Risk Monitoring, Rapid Impact Assessment, and Situational Awareness

Background and objective: The fields of urban resilience to flooding and data science are on a collision course giving rise to the emerging field of smart resilience. The objective of this study is to propose and demonstrate a smart flood resilience framework that leverages various heterogeneous community-scale big data and infrastructure sensor data to enhance predictive risk monitoring and situational awareness.

Smart flood resilience framework: The smart flood resilience framework focuses on four core capabilities that could be augmented through the use of heterogeneous community-scale big data and analytics techniques: (1) predictive flood risk mapping: the capability to predict imminent flood risks (such as overflow of channels) to inform communities and emergency management agencies to take preparation and response actions; (2) automated rapid impact assessment: the ability to automatically and quickly evaluate the extent of flood impacts (i.e., physical, social, and economic impacts) to enable crisis responders and public officials to allocate relief and rescue resources on time; (3) infrastructure failure prediction and monitoring: the ability to anticipate imminent failures in infrastructure systems as a flood event unfolds; and (4) smart situational awareness: the capability to derive proactive insights regarding the evolution of flood impacts on communities (e.g., disrupted access to critical facilities and spatio-temporal patterns of recovery).

Case study: We demonstrate the components of these core capabilities of the smart flood resilience framework in the context of 2017 Hurricane Harvey in Harris County, Texas. First, with Bayesian network modeling and deep learning methods, we show the use of flood sensor data for predicting floodwater overflow in channel networks and inundation of co-located road networks. Second, we discuss the use of social media data and machine learning techniques for assessing the impacts of floods on communities and sensing emotion signals to examine societal impacts. Third, we illustrate the use of high-resolution traffic data in network-theoretic models for nowcasting flood propagation on road networks and disrupted access to critical facilities such as hospitals. Fourth, we leverage location-based and credit card transaction data in advanced spatial data analytics to proactively evaluate community recovery and flood impacts on businesses.

Significance: This study shows the significance of the different core capabilities of the smart flood resilience framework in helping emergency managers, city planners, public officials, responders, and volunteers better cope with the impacts of catastrophic flooding events.

How to cite: Mostafavi, A. and Yuan, F.: Smart Flood Resilience: Harnessing Community-Scale Big Data for Predictive Flood Risk Monitoring, Rapid Impact Assessment, and Situational Awareness, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-781, https://doi.org/10.5194/egusphere-egu22-781, 2022.

EGU22-1230 | Presentations | ITS2.5/NH10.8

IBM Operations Risk Insights with Watson: a multi-hazard risk, AI for Natural Disaster Management use case

Overview:

Operations Risk Insight (ORI) with Watson is an IBM AI application on the cloud.  ORI analyzes thousands of news sources and alert services daily.  There are too many data sources, warnings, watches and advisories for an individual to understand.  For example, during a week in 2021 with record wildfires, hurricanes and COVID hotspots across the US, thousands of impacting risk events hit key points of interest to IBM globally and were analyzed in real time.  

Which events impacted IBM’s business, and which didn’t? ORI has saved IBM millions of dollars annually for the past 5 years.  Our non-profit disaster relief partners have used ORI to respond more effectively to the needs of the vulnerable groups impacted by disasters.  Find out how disaster response leaders identify severe risks using Watson, the Hybrid Cloud, Big Data, Machine Learning and AI.

Presentation Objectives:

The objectives of this session are:

  • Educate the audience on a pragmatic and relevant IBM internal use case for an AI-on-the-Cloud application, using many Watson and The Weather Company APIs, plus machine learning running on IBM's cloud.
  • Obtain feedback and suggestions from the audience on how to expand and improve the machine learning and data analysis for this application, to increase its value for natural disaster response leaders.
  • Inspire others to create their own grass-roots cognitive project and learn more about AI and cloud technologies.
  • Discuss how this relates to the Call for Code and is used by disaster relief agencies for free to assist the most vulnerable in society.

Reference Links:

  • ORI has been featured in two Cloud Pak for Data (CP4D) workbooks: the CP4D Watson Studio Tutorial on Risk Analysis: https://dataplatform.cloud.ibm.com/analytics/notebooks/v2/f2ee8dbf-e6af-4b00-90ca-8f7fee77c377/view and the Flood Risk Project: https://dataplatform.dev.cloud.ibm.com/exchange/public/entry/view/def444923c771f3f20285820dc072eac  Each demonstrates how machine learning methods can be applied to AI for Natural Disaster Management (NDM).
  • IBM use case for non-profit partners: https://newsroom.ibm.com/ORI-nonprofits-disaster
  • NC Tech article: https://www.ednc.org/nonprofits-and-artificial-intelligence-join-forces-for-covid-19-relief/
  • Supply Chain Management Review (SCMR) interview: https://www.scmr.com/article/nextgen_supply_chain_interview_tom_ward
  • Supply Chain navigator article: http://scnavigator.avnet.com/article/january-2017/the-missing-link/

How to cite: Ward, T. and Kanwar, R.: IBM Operations Risk Insights with Watson:  a multi-hazard risk, AI for Natural Disaster Management use case, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1230, https://doi.org/10.5194/egusphere-egu22-1230, 2022.

EGU22-1510 | Presentations | ITS2.5/NH10.8

From virtual environment to real observations: short-term hydrological forecasts with an Artificial Neural Network model. 

Renaud Jougla, Manon Ahlouche, Morgan Buire, and Robert Leconte

Machine learning approaches for hydrological forecasting are nowadays common in research. The Artificial Neural Network (ANN) is one of the most popular, owing to its good performance on watersheds with different hydrological regimes and over several timescales. A short-term (1 to 7 days ahead) forecast model was explored to predict streamflow. This study focused on the summer season, defined as May to October. Cross-validation was done over a period of 16 years, each time keeping a single year as the validation set.

The ANN model was parameterized with a single hidden layer of 6 neurons. It was developed in a virtual environment based on datasets generated by the physically based distributed hydrological model Hydrotel (Fortin et al., 2012). In a preliminary analysis, several combinations of inputs were assessed, the best combining precipitation and temperature with surface soil moisture and antecedent streamflow. Different spatial discretizations were compared. A semi-distributed discretization was selected to facilitate transferring the ANN model from a virtual environment to real observations such as remote sensing soil moisture products or ground station time series.
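A minimal sketch of the setup described above (single hidden layer of 6 neurons, one year held out per cross-validation fold), assuming scikit-learn's MLPRegressor; the inputs and target are synthetic stand-ins, not the Hydrotel-generated training data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in: 16 summers of daily inputs (precipitation, temperature,
# surface soil moisture, antecedent streamflow) -- NOT the study's dataset.
rng = np.random.default_rng(42)
years = np.repeat(np.arange(2000, 2016), 30)      # 16 summers x 30 days
X = rng.random((years.size, 4))
y = X @ np.array([0.5, 0.2, 0.8, 1.5])            # toy streamflow signal

# Cross-validation over 16 years, each year held out once as validation.
scores = []
for year in np.unique(years):
    train, val = years != year, years == year
    ann = MLPRegressor(hidden_layer_sizes=(6,),   # single hidden layer, 6 neurons
                       max_iter=2000, random_state=0)
    ann.fit(X[train], y[train])
    scores.append(ann.score(X[val], y[val]))      # R^2 on the held-out year

print(f"mean R^2 over {len(scores)} folds: {np.mean(scores):.3f}")
```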

Four watersheds were under study: the Au Saumon and Magog watersheds located in southern Québec (Canada); the Androscoggin watershed in Maine (USA); and the Susquehanna watershed located in New York and Pennsylvania (USA). All but the Susquehanna watershed are mainly forested, while the latter has a 57% forest cover. To evaluate whether a model with a data-driven structure can mimic a deterministic model, ANN and Hydrotel simulated flows were compared. Results confirm that the ANN model can reproduce streamflow output from Hydrotel with confidence.
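One common way to quantify how well a data-driven model reproduces a deterministic one is the Nash–Sutcliffe efficiency (NSE); the abstract does not name its comparison metric, so this is an illustrative choice, shown here with toy flow series:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect match; 0 means the
    simulation is no better than the mean of the reference series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy series: Hydrotel output as the reference, ANN output as the candidate.
hydrotel = np.array([10.0, 12.0, 15.0, 11.0, 9.0])
ann_flow = np.array([10.5, 11.5, 14.5, 11.0, 9.5])
print(nse(ann_flow, hydrotel))
```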

Soil moisture observation stations were deployed in the Au Saumon and Magog watersheds during the summers of 2018 to 2021. Meteorological data were extracted from the ERA5-Land reanalysis dataset. As the period of availability of observed data is short, the ANN model was trained in a virtual environment. Two validations were done: one in the virtual environment and one using real soil moisture observations and flows. The number and locations of the soil moisture probes differed slightly during each of the four summers; therefore, four models were trained, depending on the number of probes and their locations. Results highlight that the location of the soil moisture probes has a large influence on the ANN streamflow outputs and helps identify more representative sub-regions of the watershed.

The use of remote sensing data as inputs to the ANN model is promising. Soil moisture datasets from the SMOS and SMAP missions are available for the four watersheds under study, although downscaling approaches should be applied to bring the spatial resolution of those products to the watershed scale. Another future lead could be the development of a semi-distributed ANN model in a virtual environment based on a restricted selection of hydrological units with distinct physiographic characteristics. The future L-band NISAR product could be relevant for this purpose, having a finer spatial resolution than SMAP and SMOS and better penetration of the signal in forested areas than C-band SAR satellites such as Sentinel-1 and the RADARSAT Constellation Mission.

How to cite: Jougla, R., Ahlouche, M., Buire, M., and Leconte, R.: From virtual environment to real observations: short-term hydrological forecasts with an Artificial Neural Network model., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1510, https://doi.org/10.5194/egusphere-egu22-1510, 2022.

EGU22-1650 | Presentations | ITS2.5/NH10.8

Deep Learning for Tropical Cyclone Nowcasting: Experiments with Generative Adversarial and Recurrent Neural Networks

Tropical Cyclones (TCs) are deadly but rare events that cause considerable loss of life and property damage every year. Traditional TC forecasting and tracking methods focus on numerical forecasting models, synoptic forecasting and statistical methods. However, in recent years there have been several studies investigating applications of Deep Learning (DL) methods for weather forecasting with encouraging results.

We aim to test the efficacy of several DL methods for TC nowcasting, particularly focusing on Generative Adversarial Neural Networks (GANs) and Recurrent Neural Networks (RNNs). The strengths of these network types align well with the given problem: GANs are particularly apt to learn the form of a dataset, such as the typical shape and intensity of a TC, and RNNs are useful for learning timeseries data, enabling a prediction to be made based on the past several timesteps.

The goal is to produce a DL based pipeline to predict the future state of a developing cyclone with accuracy that measures up to current methods.  We demonstrate our approach based on learning from high-resolution numerical simulations of TCs from the Indian and Pacific oceans and discuss the challenges and advantages of applying these DL approaches to large high-resolution numerical weather data.

How to cite: Steptoe, H. and Xirouchaki, T.: Deep Learning for Tropical Cyclone Nowcasting: Experiments with Generative Adversarial and Recurrent Neural Networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1650, https://doi.org/10.5194/egusphere-egu22-1650, 2022.

EGU22-1662 | Presentations | ITS2.5/NH10.8

Exploring the challenges of Digital Twins for weather & climate through an Atmospheric Dispersion modelling prototype 

Stephen Haddad, Peter Killick, Aaron Hopkinson, Tomasz Trzeciak, Mark Burgoyne, and Susan Leadbetter

Digital Twins present a new user-centric paradigm for developing and using weather & climate simulations that is currently being widely embraced, for example through large projects such as Destination Earth led by ECMWF.  In this project we have taken a smaller scale approach in understanding the opportunities and challenges in translating the Digital Twin concept from the original domain of manufacturing and the built environment to modelling of the earth’s atmosphere.

We describe our approach to creating a Digital Twin based on the Met Office's atmospheric dispersion simulation package NAME. We will discuss the advantages of doing this, such as enabling non-expert users to more easily produce scientifically valid simulations of dispersion events, such as industrial fires, and to easily obtain results to feed into downstream analysis, for example of health impacts. We will describe the requirements of each of the key components of a digital twin and potential implementation approaches.

We will describe how a Digital Twin framework enables multiple models to be joined together to model complex systems, as required for atmospheric concentrations around chemical spills or fires modelled by NAME. Overall, we outline a potential project blueprint for future work to improve the usability and scientific throughput of existing modelling systems by creating Digital Twins from current core modelling code and data-gathering systems.

How to cite: Haddad, S., Killick, P., Hopkinson, A., Trzeciak, T., Burgoyne, M., and Leadbetter, S.: Exploring the challenges of Digital Twins for weather & climate through an Atmospheric Dispersion modelling prototype, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1662, https://doi.org/10.5194/egusphere-egu22-1662, 2022.

EGU22-2011 | Presentations | ITS2.5/NH10.8

InSAR-Deep learning approach for simulation and prediction of land subsidence in arid regions

Massive groundwater pumping for agricultural and industrial activities results in significant land subsidence across the arid world. Amid an acute water crisis, monitoring land subsidence and its key drivers is essential to support groundwater depletion mitigation strategies. Physical models for simulating aquifer-related land deformation are computationally expensive. The interferometric synthetic aperture radar (InSAR) technique provides precise deformation mapping yet is affected by tropospheric and ionospheric errors. This study explores the capability of a deep learning approach, coupled with satellite-derived variables, to model subsidence spatially and temporally from 2016 to 2020 and to predict subsidence in the near future using a recurrent neural network (RNN) in the Shabestar basin, Iran. The basin is part of the Urmia Lake River Basin, home to 6.4 million people; the lake has largely desiccated due to over-use of the basin's water resources. The deep learning model incorporates InSAR-derived land subsidence and its satellite-based key drivers, such as actual evapotranspiration, the Normalized Difference Vegetation Index (NDVI), land surface temperature, and precipitation, to quantify the importance of critical drivers and inform groundwater governance. Land deformation in the area varied between −93.2 mm/year and 16 mm/year on average over 2016-2020. Our findings reveal that precipitation, evapotranspiration, and vegetation coverage primarily affected land subsidence; furthermore, the subsidence rate is predicted to increase rapidly. This trend mirrors the variation of the Urmia Lake level. This study demonstrates the potential of artificial intelligence incorporating satellite-based ancillary data for land subsidence monitoring and prediction, and contributes to future groundwater management.
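The sequence-to-one input construction typical of such an RNN can be sketched as follows; the window length, variable set, and synthetic record are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def make_windows(series, n_steps):
    """Sliding windows: each sample holds n_steps past values of subsidence
    and its drivers; the target is the next subsidence value."""
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i:i + n_steps])
        y.append(series[i + n_steps, 0])  # column 0: subsidence rate
    return np.array(X), np.array(y)

# Toy monthly record, 2016-2020: subsidence plus three drivers
# (precipitation, evapotranspiration, NDVI) -- synthetic stand-ins.
rng = np.random.default_rng(1)
record = rng.random((60, 4))
X, y = make_windows(record, n_steps=12)  # one year of history per sample
print(X.shape, y.shape)
```

Each (12, 4) window would then be fed to the recurrent layers, which learn the temporal dependence between the drivers and the next subsidence value.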

How to cite: Zhang, Y. and Hashemi, H.: InSAR-Deep learning approach for simulation and prediction of land subsidence in arid regions, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2011, https://doi.org/10.5194/egusphere-egu22-2011, 2022.

EGU22-2879 | Presentations | ITS2.5/NH10.8

Automatically detecting avalanches with machine learning in optical SPOT6/7 satellite imagery 

Elisabeth D. Hafner, Patrick Barton, Rodrigo Caye Daudt, Jan Dirk Wegner, Konrad Schindler, and Yves Bühler

Safety-related applications like avalanche warning or risk management depend on timely information about avalanche occurrence. Knowledge of the locations and sizes of releasing avalanches is crucial for the responsible decision-makers. Such information is still collected today in a non-systematic way by observers in the field, for example from ski resort patrols or community avalanche services. Consequently, existing avalanche mapping is strongly biased towards accessible terrain in proximity to (winter sport) infrastructure, in particular in situations with high avalanche danger.

Recently, remote sensing has been shown to be capable of partly filling this gap, providing spatially continuous information on avalanche occurrences over large regions. In previous work we used optical SPOT 6/7 satellite imagery to manually map two avalanche periods over a large part of the Swiss Alps (2018: 12,500 and 2019: 9,500 km2). Subsequently, we investigated the reliability of this mapping and proved its suitability by identifying almost three quarters of all avalanches that occurred (larger than size 1) from SPOT 6/7 imagery. Optical SPOT data is therefore an excellent source for continuous avalanche mapping, currently restricted by the time-intensive manual mapping. To speed up this process we now propose a fully convolutional neural network (CNN) called AvaNet. AvaNet is based on a DeepLabv3+ architecture, adapted to learn specifically what avalanches look like, for example by explicitly including height information from a digital terrain model (DTM). Relying on the 24,737 manually mapped avalanches for training, validation and testing, AvaNet achieves an F1 score of 62.5% when thresholding the probabilities from the network predictions at 0.5. In this study we present the results of our network in more detail, including different model variations and predictions on data from a third avalanche period we did not train on.

The ability to automate the mapping and thereby quickly identify avalanches from satellite imagery is an important step towards regularly acquiring spatially continuous avalanche occurrence data. This enables the provision of essential information to complement avalanche databases, making Alpine regions safer.

How to cite: Hafner, E. D., Barton, P., Caye Daudt, R., Wegner, J. D., Schindler, K., and Bühler, Y.: Automatically detecting avalanches with machine learning in optical SPOT6/7 satellite imagery, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2879, https://doi.org/10.5194/egusphere-egu22-2879, 2022.

EGU22-3212 | Presentations | ITS2.5/NH10.8

Predicting Landslide Susceptibility in Cross River State of Nigeria using Machine Learning 

Joel Efiong, Devalsam Eni, Josiah Obiefuna, and Sylvia Etu

Landslides continue to wreak havoc in many parts of the globe, yet comprehensive studies of landslide susceptibility in many of these areas are either lacking or inadequate. Hence, this study aimed to predict landslide susceptibility in Cross River State, Nigeria, using machine learning. Specifically, the frequency ratio (FR) model was adopted. A landslide inventory map was developed using 72 landslide locations identified during fieldwork, combined with other relevant data sources. Using appropriate geostatistical analyst tools within a geographic information system (GIS) environment, the landslide locations were randomly divided into two parts in a 7:3 ratio for training and validation, respectively. A total of 12 landslide-causing factors (elevation, slope, aspect, profile curvature, plan curvature, topographic position index, topographic wetness index, stream power index, land use/land cover, geology, distance to waterbody and distance to major roads) were selected and used in the spatial analysis of the factors influencing landslide occurrence in the study area. The FR model was then developed using the training sample to investigate landslide susceptibility in Cross River State, and subsequently validated. The distribution of landslides in Cross River State was found to be largely controlled by the combined effect of geo-environmental factors such as elevations of 250–500 m, slope gradients of >35°, slopes facing southwest, decreasing degrees of both positive and negative curvature, increasing values of the topographic position index, fragile sands, sparse vegetation (especially in settlement and bare-surface areas), and distances to waterbodies and major roads of <500 m. About 46% of the mapped area fell within landslide susceptibility zones ranging from moderate to very high.
The susceptibility model was validated with 90.90% accuracy. This study provides a comprehensive investigation of landslide susceptibility in Cross River State, which will be useful for land use planning and for mitigation measures against landslide-induced vulnerability in the study area, and whose findings can be extrapolated to other areas with similar environmental conditions. This is a novel use of a machine learning technique in hazard susceptibility mapping.
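The frequency ratio itself has a simple closed form: for each class of a causative factor, the share of landslide pixels falling in the class divided by the share of all pixels in the class, with FR > 1 marking classes that favour landslide occurrence. A minimal sketch with a toy slope-class raster (all values illustrative):

```python
import numpy as np

def frequency_ratio(factor_class, landslide_mask):
    """FR per class: proportion of landslide pixels in the class divided
    by the proportion of all pixels in the class."""
    fr = {}
    total_pixels = factor_class.size
    total_slides = landslide_mask.sum()
    for c in np.unique(factor_class):
        in_class = factor_class == c
        slide_share = landslide_mask[in_class].sum() / total_slides
        class_share = in_class.sum() / total_pixels
        fr[c] = slide_share / class_share
    return fr

# Toy raster: slope classes 0 (<35 deg) and 1 (>35 deg) plus a landslide mask.
slope_class = np.array([0, 0, 0, 0, 1, 1, 1, 1])
landslides = np.array([0, 0, 0, 1, 1, 1, 1, 0])
print(frequency_ratio(slope_class, landslides))
```

Summing the FR values of all factors per pixel then yields the landslide susceptibility index used to delineate the susceptibility zones.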

 

Keywords: Landslide; Landslide susceptibility mapping; Cross River State, Nigeria; Frequency ratio; Machine learning

How to cite: Efiong, J., Eni, D., Obiefuna, J., and Etu, S.: Predicting Landslide Susceptibility in Cross River State of Nigeria using Machine Learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3212, https://doi.org/10.5194/egusphere-egu22-3212, 2022.

EGU22-3283 | Presentations | ITS2.5/NH10.8

Assessment of Flood-Damaged Cropland Trends Under Future Climate Scenarios Using Convolutional Neural Network 

Rehenuma Lazin, Xinyi Shen, and Emmanouil Anagnostou

Every year, floods cause severe damage to cropland, contributing to global food insecurity. As climate change continues, floods are predicted to become more frequent. To cope with future climate impacts, mitigate damage, and ensure food security, it is now imperative to study future flood damage trends in cropland areas. In this study, we use a convolutional neural network (CNN) to estimate the damage (in acres) to corn and soybean lands across the midwestern USA with projections from climate models. Here, we extend the application of the CNN model developed by Lazin et al. (2021), which shows ~25% mean relative error for county-level flood-damaged crop loss estimation. Meteorological variables derived from the gridMET reference dataset serve as predictors to train the model over 2008-2020. We then use downscaled climate projections from the Multivariate Adaptive Constructed Analogs (MACA) dataset in the trained CNN model to assess future flood damage patterns in cropland in the early (2011-2040), mid (2041-2070), and late (2071-2100) century, relative to the historical baseline period (1981-2010). The results of this study will help in understanding crop loss trends due to floods under climate change scenarios and in planning the arrangements necessary to mitigate future damage.

 

Reference:

[1] Lazin, R., Shen, X., & Anagnostou, E. (2021). Estimation of flood-damaged cropland area using a convolutional neural network. Environmental Research Letters, 16(5), 054011.

How to cite: Lazin, R., Shen, X., and Anagnostou, E.: Assessment of Flood-Damaged Cropland Trends Under Future Climate Scenarios Using Convolutional Neural Network, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3283, https://doi.org/10.5194/egusphere-egu22-3283, 2022.

EGU22-3422 | Presentations | ITS2.5/NH10.8

Weather history encoding for machine learning-based snow avalanche detection 

Thomas Gölles, Kathrin Lisa Kapper, Stefan Muckenhuber, and Andreas Trügler

Since its start in 2014, the Copernicus Sentinel-1 programme has provided free-of-charge, weather-independent, high-resolution satellite Earth observations and has set major scientific advances in the detection of snow avalanches from satellite imagery in motion. Recently, operational avalanche detection from Sentinel-1 synthetic aperture radar (SAR) images was successfully introduced for some test regions in Norway. However, current state-of-the-art avalanche detection algorithms based on machine learning do not include weather history. We propose a novel way to encode weather data and include it in an automatic avalanche detection pipeline for the Austrian Alps. The approach consists of four steps. First, the raw data in netCDF format are downloaded, consisting of several meteorological parameters over several time steps. Second, the weather data are downscaled onto the pixel locations of the SAR image. Third, the data are aggregated over time, producing a two-dimensional grid with one value per SAR pixel at the time the SAR data were recorded; the aggregation function can range from simple averages to full snowpack models. In the final step, the grid is converted to an image with greyscale values corresponding to the aggregated values. The resulting image is then ready to be fed into the machine learning pipeline. We will include this encoded weather history to increase avalanche detection performance, and will investigate contributing factors with model interpretability tools and explainable artificial intelligence.
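Steps two to four of the pipeline can be sketched as follows, using a simple mean as the aggregation function; the array shapes and the min-max greyscale scaling are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def encode_weather(history, aggregate=np.mean):
    """Collapse a (time, rows, cols) stack of a weather variable, already
    downscaled to SAR pixel locations, into an 8-bit greyscale image."""
    grid = aggregate(history, axis=0)        # one value per SAR pixel
    lo, hi = grid.min(), grid.max()
    scaled = (grid - lo) / (hi - lo)         # normalise to [0, 1]
    return (scaled * 255).astype(np.uint8)   # greyscale 0-255

# Toy stack: 10 time steps of one variable on a 4x4 SAR pixel grid.
rng = np.random.default_rng(7)
history = rng.random((10, 4, 4))
img = encode_weather(history)
print(img.shape, img.dtype)
```

Swapping the `aggregate` argument for a snowpack-model output would keep the rest of the encoding unchanged, which is the flexibility the abstract describes.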

How to cite: Gölles, T., Kapper, K. L., Muckenhuber, S., and Trügler, A.: Weather history encoding for machine learning-based snow avalanche detection, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3422, https://doi.org/10.5194/egusphere-egu22-3422, 2022.

EGU22-4250 | Presentations | ITS2.5/NH10.8

Landslide Susceptibility Modeling of an Escarpment in Southern Brazil using Artificial Neural Networks as a Baseline for Modeling Triggering Rainfall 

Luísa Vieira Lucchese, Guilherme Garcia de Oliveira, Alexander Brenning, and Olavo Correa Pedrollo

Landslide susceptibility mapping (LSM) and rainfall thresholds are well-documented tools for modelling the occurrence of rainfall-induced landslides. Where rainfall is the main landslide trigger, both methodologies apply to essentially the same locations, and a model that encompasses both would be an important step towards better understanding and prediction of landslide-triggering rainfall events. In this research, we employ spatially cross-validated, hyperparameter-tuned Artificial Neural Networks (ANNs) to predict landslide susceptibility in an area of southern Brazil. As a next step, we plan to add the triggering rainfall to this artificial intelligence model, which will concurrently model the susceptibility and the triggering rainfall event for a given area. The ANN is a Multi-Layer Perceptron with three layers. The number of neurons in the hidden layer was tuned separately for each cross-validation fold, using a method described in previous work. The study area is the escarpment at the limits of the municipalities of Presidente Getúlio, Rio do Sul, and Ibirama in southern Brazil. For this area, 82 landslide scars related to the event of December 17th, 2020 were mapped. The metrics for each fold are presented, and the final susceptibility map for the area is shown and analyzed. The evaluation metrics attained are satisfactory, and the resulting susceptibility map highlights the escarpment areas as the most susceptible to landslides. The ANN-based susceptibility mapping of the area is considered successful and serves as a baseline for identifying rainfall thresholds in susceptible areas, which will be accomplished with a combined susceptibility and rainfall model in future work.

How to cite: Vieira Lucchese, L., Garcia de Oliveira, G., Brenning, A., and Correa Pedrollo, O.: Landslide Susceptibility Modeling of an Escarpment in Southern Brazil using Artificial Neural Networks as a Baseline for Modeling Triggering Rainfall, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4250, https://doi.org/10.5194/egusphere-egu22-4250, 2022.

EGU22-4266 | Presentations | ITS2.5/NH10.8

Camera Rain Gauge Based on Artificial Intelligence 

Raffaele Albano, Nicla Notarangelo, Kohin Hirano, and Aurelia Sole

Flood risk monitoring, alerting and adaptation in urban areas require near-real-time, fine-scale precipitation observations that are challenging to obtain from currently available measurement networks due to their costs and installation difficulties. In this context, newly available data sources and computational techniques offer enormous potential, in particular the exploitation of non-specific, widespread, and accessible devices.

This study proposes an unprecedented system for rainfall monitoring based on artificial intelligence, using deep learning for computer vision applied to camera images. In contrast to the existing literature, the method is not device-specific and exploits general-purpose cameras (e.g., smartphones, surveillance cameras, dashboard cameras), in particular low-cost devices, without requiring parameter setting, timeline shots, or videos. Rainfall is measured directly from single photographs through deep learning models based on transfer learning with Convolutional Neural Networks. A binary classification algorithm is developed to detect the presence of rain. Moreover, a multi-class classification algorithm is used to estimate a quasi-instantaneous rainfall intensity range. Open data, dash-cams in Japan coupled with the high-precision multi-parameter radar XRAIN, and experiments in the NIED Large Scale Rainfall Simulator were combined to form heterogeneous and verisimilar datasets for training, validation, and testing. Finally, a case study over the Matera urban area (Italy) was used to illustrate the potential and limitations of rainfall monitoring using camera-based detectors.

The prototype was deployed in a real-world operational environment using a pre-existing 5G surveillance camera. The results of the binary classifier showed great robustness and portability: accuracy was 85.28% and 85.13%, and the F1-score 0.86 and 0.85, for test and deployment respectively, whereas literature algorithms suffer drastic accuracy drops when the image source changes (e.g., from 91.92% to 18.82%). The 6-way classifier reached a test average accuracy of 77.71% and a macro-averaged F1 of 0.73, with the best performances for no-rain and heavy rainfall, which represent critical conditions for flood risk. Thus, the results of the tests and the use case demonstrate the model’s ability to detect a significant meteorological state for early warning systems. The classification can be performed on single pictures taken in disparate lighting conditions by common acquisition devices, i.e. by static or moving cameras without adjusted parameters. The system does not suit scenes that are also misleading for human visual perception. The proposed method features a high readiness level, cost-effectiveness, and limited operational requirements that allow easy and quick implementation by exploiting pre-existing devices with a parsimonious use of economic and computational resources.
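
The accuracy and macro-averaged F1 figures reported above can be reproduced from a confusion matrix; a small self-contained implementation follows (the example labels are invented, purely to exercise the functions).

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight,
    the metric quoted for the 6-way rainfall-intensity classifier."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    f1s = []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return float(np.mean(f1s))

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
print(accuracy(y_true, y_pred))            # 4 of 6 correct, ≈ 0.667
print(macro_f1(y_true, y_pred, [0, 1, 2])) # ≈ 0.656
```

Macro-averaging weights all classes equally, which matters here because no-rain frames vastly outnumber heavy-rain frames in any realistic camera stream.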

Altogether, this study corroborates the potential of non-traditional and opportunistic sensing networks for the development of hydrometeorological monitoring systems in urban areas, where traditional measurement methods encounter limitations, and in data-scarce contexts, e.g. where remotely sensed rainfall information is unavailable or too coarse with respect to the scale of the study. Future research will involve incremental learning algorithms and further data collection via experiments and crowdsourcing, to improve accuracy and at the same time promote public resilience from a smart city perspective.

How to cite: Albano, R., Notarangelo, N., Hirano, K., and Sole, A.: Camera Rain Gauge Based on Artificial Intelligence, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4266, https://doi.org/10.5194/egusphere-egu22-4266, 2022.

EGU22-4730 | Presentations | ITS2.5/NH10.8

floodGAN – A deep learning-based model for rapid urban flood forecasting 

Julian Hofmann and Holger Schüttrumpf

Recent urban flood events revealed how severe and fast the impacts of heavy rainfall can be. Pluvial floods pose an increasing risk to communities worldwide due to ongoing urbanization and changes in climate patterns. Still, pluvial flood warnings are limited to meteorological forecasts or water level monitoring, which are insufficient to warn people against local and terrain-specific flood risks. Therefore, rapid flood models are essential for implementing effective and robust early warning systems to mitigate the risk of pluvial flooding. Although hydrodynamic (HD) models are state-of-the-art for simulating pluvial flood hazards, their computation times are too long for real-time applications.

In order to overcome the computation-time bottleneck of HD models, the deep learning model floodGAN has been developed. FloodGAN combines two adversarial Convolutional Neural Networks (CNNs) that are trained on high-resolution rainfall–flood data generated from rainfall generators and HD models. FloodGAN translates the flood forecasting problem into an image-to-image translation task, whereby the model learns the non-linear spatial relationships between rainfall and hydraulic data. Thus, it directly translates spatially distributed rainfall forecasts into detailed hazard maps within seconds. In addition to the inundation depth, the model can predict the velocities and time periods of hydraulic peaks of an upcoming rainfall event. Due to its image-translation approach, the floodGAN model can be applied to large areas and run on standard computer systems, fulfilling the requirements of fast and practical flood warning systems.

To evaluate the accuracy and generalization capabilities of the floodGAN model, numerous performance tests were carried out using synthetic rainfall events as well as a past heavy rainfall event from 2018. For this purpose, the city of Aachen was used as a case study. The performance tests demonstrated a speedup factor of 10⁶ compared to HD models while maintaining high model quality and accuracy and good generalization capabilities for highly variable rainfall events. Further improvements can be obtained by integrating recurrent neural network architectures and training with temporal rainfall series to forecast the dynamics of the flooding processes.
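
floodGAN itself is a conditional image-to-image GAN, which is not reproduced here. As a much-simplified illustration of the adversarial objective only, the following toy 1-D GAN in plain NumPy trains a linear generator against a logistic discriminator with hand-derived gradients; every choice (distributions, learning rate, parameterization) is an assumption for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda u: 1.0 / (1.0 + np.exp(-u))

# Generator G(z) = a*z + b tries to map noise onto the "data" distribution
# N(3, 1); discriminator D(x) = sigmoid(w*x + c) tries to tell them apart.
a, b, w, c = 1.0, 0.0, 0.1, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    z = rng.standard_normal(batch)
    x_real = 3.0 + rng.standard_normal(batch)
    x_fake = a * z + b

    # discriminator step: ascend log D(real) + log(1 - D(fake))
    s_r, s_f = sig(w * x_real + c), sig(w * x_fake + c)
    gw = np.mean(-(1 - s_r) * x_real + s_f * x_fake)
    gc = np.mean(-(1 - s_r) + s_f)
    w -= lr * gw
    c -= lr * gc

    # generator step: ascend log D(fake), i.e. fool the discriminator
    s_f = sig(w * x_fake + c)
    ga = np.mean(-(1 - s_f) * w * z)
    gb = np.mean(-(1 - s_f) * w)
    a -= lr * ga
    b -= lr * gb

fake = a * rng.standard_normal(1000) + b  # samples should now center near 3
```

In floodGAN the same minimax game is played between convolutional networks over rainfall and inundation rasters rather than scalars, so a single forward pass of the trained generator replaces an entire HD simulation.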

How to cite: Hofmann, J. and Schüttrumpf, H.: floodGAN – A deep learning-based model for rapid urban flood forecasting, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4730, https://doi.org/10.5194/egusphere-egu22-4730, 2022.

EGU22-4900 | Presentations | ITS2.5/NH10.8

A modular and scalable workflow for data-driven modelling of shallow landslide susceptibility 

Ann-Kathrin Edrich, Anil Yildiz, Ribana Roscher, and Julia Kowalski

The spatial impact of a single shallow landslide is small compared to a deep-seated failure, and hence its damage potential is localized and limited. Yet their higher frequency of occurrence and spatio-temporal correlation in response to external triggering events, such as strong precipitation, nevertheless result in dramatic risks for population, infrastructure and environment. It is therefore essential to continuously investigate and analyze the spatial hazard that shallow landslides pose. Its visualisation through regularly updated, dynamic hazard maps can be used by decision and policy makers. Even though a number of data-driven approaches for shallow landslide hazard mapping exist, a generic workflow has not yet been described. Therefore, we introduce a scalable and modular machine learning-based workflow for shallow landslide hazard prediction in this study. The scientific test case for the development of the workflow investigates the rainfall-triggered shallow landslide hazard in Switzerland. A benchmark dataset was compiled to train the data-driven model, based on a historic landslide database as presence data as well as a pseudo-random choice of absence locations. The features included in this dataset currently comprise 14 parameters from topography, soil type, land cover and hydrology. This work also focuses on identifying a suitable approach for choosing absence locations and on the influence of this choice on the predicted hazard, as this influence has not been comprehensively studied. We aim to enable time-dependent and dynamic hazard mapping by incorporating time-dependent precipitation data into the training dataset alongside the static features. Inclusion of temporal trigger factors, i.e. rainfall, enables a regularly updated landslide hazard map based on the precipitation forecast.
Our approach includes the investigation of a suitable precipitation metric for the occurrence of shallow landslides at the absence locations, based on the statistical evaluation of the precipitation behaviour at the presence locations. In this presentation, we will describe the modular workflow as well as the benchmark dataset and show preliminary results, including the above-mentioned approaches to handling absence locations and time-dependent data.
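
The abstract deliberately leaves the absence-sampling strategy open. One common heuristic, sketched here purely as an assumption, draws pseudo-random candidate locations over the study extent and rejects any candidate within a buffer distance of a mapped landslide, so that "absence" points are not accidentally placed on failed slopes.

```python
import numpy as np

def sample_absences(presence_xy, n, extent, min_dist, seed=0):
    """Draw pseudo-absence locations uniformly over the study extent,
    rejecting candidates closer than min_dist to any landslide presence
    point (one plausible strategy among those the workflow compares)."""
    rng = np.random.default_rng(seed)
    (xmin, xmax), (ymin, ymax) = extent
    absences = []
    while len(absences) < n:
        cand = np.array([rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)])
        d = np.linalg.norm(presence_xy - cand, axis=1)
        if d.min() >= min_dist:            # keep only buffered candidates
            absences.append(cand)
    return np.array(absences)

# 30 synthetic landslide scars in a 10 x 10 km study area
presence = np.random.default_rng(1).uniform(0, 10, size=(30, 2))
absence = sample_absences(presence, n=30,
                          extent=((0, 10), (0, 10)), min_dist=0.5)
```

The buffer radius trades off label noise against spatial coverage, which is exactly the kind of influence on the predicted hazard that the study sets out to quantify.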

How to cite: Edrich, A.-K., Yildiz, A., Roscher, R., and Kowalski, J.: A modular and scalable workflow for data-driven modelling of shallow landslide susceptibility, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4900, https://doi.org/10.5194/egusphere-egu22-4900, 2022.

EGU22-6568 | Presentations | ITS2.5/NH10.8

Harnessing Machine Learning and Deep Learning applications for climate change risk assessment: a survey 

Davide Mauro Ferrario, Elisa Furlan, Silvia Torresan, Margherita Maraschini, and Andrea Critto

In recent years there has been growing interest in Machine Learning (ML) for climate risk / multi-risk assessment, driven mainly by the growing amount of available data and the reduction of associated computational costs. Extracting information from spatio-temporal data is critically important for problems such as extreme event forecasting and assessing risks and impacts from multiple hazards. Typical challenges to which AI and ML are now being applied require understanding the dynamics of complex systems, which involve many features with non-linear relations and feedback loops; analysing the effects of phenomena happening at different time scales, such as slow-onset events (sea level rise) and short-term episodic events (storm surges, floods); and estimating the uncertainties of long-term predictions and scenarios.
While there have been many successful applications of AI/ML in recent years, such as Random Forests or Long Short-Term Memory (LSTM) networks in flood and storm surge risk assessment, open questions and challenges remain. In particular, there is a lack of data for extreme events, and Deep Learning (DL) algorithms often need huge amounts of information to disentangle the relationships among the hazard, exposure and vulnerability factors contributing to the occurrence of risks. Moreover, the spatio-temporal resolution can be highly irregular and may need to be reconstructed to produce accurate and efficient models. For example, data from meteorological ground stations offer accurate datasets with fine temporal resolution but an irregular distribution in the spatial dimension; on the other hand, satellite images give access to more spatially refined data but often lack the temporal dimension (fewer events available due to atmospheric disturbances).
Several techniques have been applied, ranging from classical multi-step forecasting, state-space and Hidden Markov models to DL techniques such as Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). ANNs and Deep Generative Models (DGM) have been used to reconstruct spatio-temporal grids and to model continuous time series, CNNs to exploit spatial relations, Graph Neural Networks (GNN) to extract multi-scale localized spatial features, and RNNs and LSTMs for multi-scale time series prediction.
To bridge these gaps, an in-depth state-of-the-art review of the mathematical and computer science innovations in ML/DL techniques that could be applied to climate/multi-risk assessment was undertaken. The review focuses on three possible ML/DL applications: analysis of the spatio-temporal dynamics of risk factors, with particular attention to applications for irregular spatio-temporal grids; multivariate analysis for multi-hazard interactions and multiple risk assessment endpoints; and analysis of future scenarios under climate change. We will present the main outcomes of the scientometric and systematic review of publications across the 2000–2021 timeframe, which allowed us to: i) summarize keywords and word co-occurrence networks, ii) highlight linkages, working relations and co-citation clusters, iii) compare ML and DL approaches with classical statistical techniques, and iv) explore applications at the forefront of the risk assessment community.

How to cite: Ferrario, D. M., Furlan, E., Torresan, S., Maraschini, M., and Critto, A.: Harnessing Machine Learning and Deep Learning applications for climate change risk assessment: a survey, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6568, https://doi.org/10.5194/egusphere-egu22-6568, 2022.

EGU22-6576 | Presentations | ITS2.5/NH10.8

Swept Away: Flooding and landslides in Mexican poverty nodes 

Silvia García, Raul Aquino, and Walter Mata

Natural disasters should be examined within a risk-perspective framework where both the natural threat and vulnerability are considered intricate components of an extremely complex equation. The trend toward more frequent floods and landslides in Mexico in recent decades is not only the result of more intense rainfall, but also a consequence of increased vulnerability. As a multifactorial element, vulnerability is a low-frequency modulating factor of the risk dynamics of intense rainfall. It can be described in terms of physical, social, and economic factors. For instance, deforested or urbanized areas are the physical and social factors that lead to the deterioration of watersheds and an increased vulnerability to intense rains. Increased watershed vulnerability due to land-cover changes is the primary factor leading to more floods, particularly over Pacific Mexico. In some parts of the country, such as Colima, the increased frequency of intense rainfall (i.e., the natural hazard) associated with high-intensity tropical cyclones and hurricanes is the leading cause of more frequent floods.

 

In this research an intelligent rain-management system is presented. The system is built to forecast and simulate the components of risk, to establish communication between rescue/aid teams, and to help in preparedness activities (training). Its main task is the detection, monitoring, analysis and forecasting of the hazards and scenarios that promote floods and landslides. The developed methodology is based on a database that makes it possible to relate heavy rainfall measurements to changes in land cover and use, terrain slope, basin compactness and community resilience as key vulnerability factors. A neural procedure is used for the spatial definition of exposure and susceptibility (intrinsic and extrinsic parameters), and Machine Learning techniques are applied to find the If-Then relationships. The capability of the intelligent model for Colima, Mexico was tested by comparing the observed and modeled frequency of landslides and floods over a ten-year period. It was found that over most of the Mexican territory, more frequent floods are the result of a rapid deforestation process, and that landslides and their impact on communities are directly related to the unauthorized growth of settlements in high geo-risk areas (due to migration forced by violence or extreme poverty) and to the development of civil infrastructure (mainly roads) with a high impact on the natural environment. Consequently, the intelligent rain-management system offers the possibility to redesign and plan land use and the spatial distribution of the poorest communities.

How to cite: García, S., Aquino, R., and Mata, W.: Swept Away: Flooding and landslides in Mexican poverty nodes, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6576, https://doi.org/10.5194/egusphere-egu22-6576, 2022.

EGU22-6690 | Presentations | ITS2.5/NH10.8

A machine learning-based ensemble model for estimation of seawater quality parameters in coastal area 

Xiaotong Zhu, Jinhui Jeanne Huang, Hongwei Guo, Shang Tian, and Zijie Zhang

The precise estimation of seawater quality parameters is crucial for decision-makers managing coastal water resources. Although various machine learning (ML)-based algorithms have been developed for seawater quality retrieval using remote sensing technology, the performance of these models in specific regions remains significantly uncertain due to the differing properties of coastal waters. Moreover, the prediction results of these ML models are unexplainable. To address these problems, an ML-based ensemble model was developed in this study. The model was applied to estimate chlorophyll-a (Chla), turbidity, and dissolved oxygen (DO) based on Sentinel-2 satellite imagery of Shenzhen Bay, China. The optimal input features for each seawater quality parameter were selected from nine simulation scenarios generated from eight spectral bands and six spectral indices. A local explanation method called SHapley Additive exPlanations (SHAP) was introduced to quantify the contributions of the various features to the predictions of the seawater quality parameters. The results suggested that the ensemble model with feature selection enhanced the performance of all three seawater quality parameter estimations (the errors were 1.7%, 1.5%, and 0.02% for Chla, turbidity, and DO, respectively). Furthermore, the reliability of the model performance was further verified by mapping the spatial distributions of the water quality parameters during the model validation period. The spatio-temporal patterns of the seawater quality parameters revealed that the distributions of seawater quality were mainly influenced by estuary input. Correlation analysis demonstrated that air temperature (Temp) and average air pressure (AAP) exhibited the closest relationship with Chla. DO was most closely related to Temp, while turbidity was not sensitive to Temp, average wind speed (AWS), or AAP.
This study enhanced the prediction capability for seawater quality parameters and provides decision-makers with a scientific approach to coastal water management.
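
SHAP approximates Shapley values efficiently; for a model with only a handful of features they can also be computed exactly by enumerating feature orderings. The sketch below uses a hypothetical additive three-feature model (not the paper's ensemble), for which each feature's Shapley value provably reduces to its own term.

```python
import numpy as np
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values by permutation enumeration: the average marginal
    contribution of each feature over all orderings, with absent features
    held at a baseline value (feasible only for a few features)."""
    d = len(x)
    phi = np.zeros(d)
    perms = list(permutations(range(d)))
    for order in perms:
        z = baseline.copy()
        prev = f(z)
        for i in order:        # switch features on one by one
            z[i] = x[i]
            cur = f(z)
            phi[i] += cur - prev
            prev = cur
    return phi / len(perms)

# Hypothetical additive proxy: a water-quality response from three features.
model = lambda v: 2.0 * v[0] - 1.0 * v[1] + 0.5 * v[2]
x = np.array([1.0, 2.0, 4.0])
base = np.zeros(3)
phi = shapley_values(model, x, base)
# Additive model, zero baseline: phi recovers each term, [2.0, -2.0, 2.0]
```

By the efficiency property, the attributions always sum to f(x) minus f(baseline), which is what makes SHAP's per-feature contributions interpretable as a decomposition of a single prediction.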

How to cite: Zhu, X., Huang, J. J., Guo, H., Tian, S., and Zhang, Z.: A machine learning-based ensemble model for estimation of seawater quality parameters in coastal area, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6690, https://doi.org/10.5194/egusphere-egu22-6690, 2022.

EGU22-6758 | Presentations | ITS2.5/NH10.8

AI-enhanced Integrated Alert System for effective Disaster Management 

Pankaj Kumar Dalela, Saurabh Basu, Sandeep Sharma, Anugandula Naveen Kumar, Suvam Suvabrata Behera, and Rajkumar Upadhyay

Effective communication systems supported by Information and Communication Technologies (ICTs) are integral components of comprehensive disaster management. Continuous warning monitoring, prediction, dissemination, and response coordination, along with public engagement, utilizing the capabilities of emerging technologies including Artificial Intelligence (AI), can assist in building resilience and ensuring Disaster Risk Reduction. Thus, for effective disaster management, an Integrated Alert System is proposed which brings all concerned disaster management authorities and alert forecasting and disseminating agencies under a single umbrella for alerting the targeted public through various communication channels. Enhanced through AI, an integral part of the system is a data-driven, citizen-centric Decision Support System which can help disaster managers perform a complete impact assessment of disaster events through the configuration of decision models developed by learning the inter-relationships of different parameters. The system needs to be capable of identifying possible communication means to maximize community outreach, predicting the scope of an alert, assessing the influence of an alert message on the targeted vulnerable population, performing crowdsourced data analysis, and evaluating disaster impact through threat maps and dashboards, thereby providing a complete analysis of the disaster event in all phases of disaster management. By utilizing state-of-the-art technologies, the system aims to address challenges posed by current systems, including limited utilization of communication channels, limited audience reach, language differences, and a lack of ground information in decision making.

How to cite: Dalela, P. K., Basu, S., Sharma, S., Kumar, A. N., Behera, S. S., and Upadhyay, R.: AI-enhanced Integrated Alert System for effective Disaster Management, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6758, https://doi.org/10.5194/egusphere-egu22-6758, 2022.

EGU22-7129 | Presentations | ITS2.5/NH10.8

The Samos earthquake event (Mw = 7, 30 October 2020, Greece) as case study for applying machine learning on texts and photos scraped from social networks for developing seismic intensity maps 

S. G. Arapostathis

The main purpose of this research is to present the latest findings on automatic methods for mining social network data to develop seismic intensity maps. As a case study, the author selected the 2020 Samos earthquake event (Mw = 7, 30 October 2020, Greece). That earthquake had significant consequences for the urban environment, along with 2 deaths and 19 injuries. Initially, an automatic approach recently presented in the international literature was applied to produce seismic intensity maps from tweets. Furthermore, some initial findings regarding the use of machine learning in various parts of the automatic methodology are presented, along with the potential of using photos posted on social networks. The data used comprise several thousand tweets and Instagram posts. The results provide vital findings on enriching data sources and data types, and on effective rapid processing.

How to cite: Arapostathis, S. G.: The Samos earthquake event (Mw = 7, 30 October 2020, Greece) as case study for applying machine learning on texts and photos scraped from social networks for developing seismic intensity maps., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7129, https://doi.org/10.5194/egusphere-egu22-7129, 2022.

EGU22-7308 | Presentations | ITS2.5/NH10.8

Building an InSAR-based database to support geohazard risk management by exploiting large ground deformation datasets 

Marta Béjar-Pizarro, Pablo Ezquerro, Carolina Guardiola-Albert, Héctor Aguilera Alonso, Margarita Patricia Sanabria Pabón, Oriol Monserrat, Anna Barra, Cristina Reyes-Carmona, Rosa Maria Mateos, Juan Carlos García López Davalillo, Juan López Vinielles, Guadalupe Bru, Roberto Sarro, Jorge Pedro Galve, Roberto Tomás, Virginia Rodríguez Gómez, Joaquín Mulas de la Peña, and Gerardo Herrera

The detection of areas of the Earth’s surface experiencing active deformation processes and the identification of the responsible phenomena (e.g. landslides activated after rainy events, subsidence due to groundwater extraction in agricultural areas, consolidation settlements, instabilities in active or abandoned mines) is critical for geohazard risk management and ultimately to mitigate the unwanted effects on the affected populations and the environment.

This will now be possible at the European level thanks to the Copernicus European Ground Motion Service (EGMS), which will provide ground displacement measurements derived from time series analyses of Sentinel-1 data using Interferometric Synthetic Aperture Radar (InSAR). The EGMS, which will be available to users in the first quarter of 2022 and will be updated annually, will be especially useful for identifying displacements associated with landslides, subsidence and deformation of infrastructure. To fully exploit the capabilities of these large InSAR datasets, it is fundamental to develop automatic analysis tools, such as machine learning algorithms, which require an InSAR-derived deformation database for training and improvement.

Here we present the preliminary InSAR-derived deformation database developed in the framework of the SARAI project, which incorporates the previous InSAR results of the IGME-InSARlab and CTTC teams in Spain. The database contains classified measurement points with the associated InSAR deformation and a set of environmental variables potentially correlated with the deformation phenomena, such as geology/lithology, land-surface slope, land cover, meteorological data, population density, and inventories such as the mining registry, the groundwater database, and IGME’s land movements database (MOVES). We discuss the main strategies used to identify and classify pixels and areas that are moving, the covariates used, and some ideas for improving the database in the future. This work has been developed in the framework of project PID2020-116540RB-C22, funded by MCIN/AEI/10.13039/501100011033, and the e-Shape project, funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement 820852.
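
The project's actual classification strategies are more elaborate, but a common first building block, sketched here as an assumption, is to fit a linear velocity to each pixel's displacement time series and flag pixels whose velocity magnitude exceeds a threshold as "moving".

```python
import numpy as np

def velocities(t_years, disp_mm):
    """Least-squares linear velocity (mm/yr) for each pixel's displacement
    time series (rows = pixels, columns = acquisition epochs)."""
    t = np.asarray(t_years)
    A = np.vstack([t, np.ones_like(t)]).T               # design matrix [t, 1]
    coef, *_ = np.linalg.lstsq(A, np.asarray(disp_mm).T, rcond=None)
    return coef[0]                                       # slope per pixel

t = np.linspace(0, 3, 60)                                # 3 years of scenes
stable = 0.2 * np.random.default_rng(0).standard_normal((1, 60))
subsiding = -12.0 * t + 0.2 * np.random.default_rng(1).standard_normal(60)
v = velocities(t, np.vstack([stable, subsiding]))
moving = np.abs(v) > 5.0                                 # mm/yr threshold
```

Points flagged this way can then be joined with the covariates (lithology, slope, land cover, inventories) to build the labelled samples that a machine learning classifier needs.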

How to cite: Béjar-Pizarro, M., Ezquerro, P., Guardiola-Albert, C., Aguilera Alonso, H., Sanabria Pabón, M. P., Monserrat, O., Barra, A., Reyes-Carmona, C., Mateos, R. M., García López Davalillo, J. C., López Vinielles, J., Bru, G., Sarro, R., Galve, J. P., Tomás, R., Rodríguez Gómez, V., Mulas de la Peña, J., and Herrera, G.: Building an InSAR-based database to support geohazard risk management by exploiting large ground deformation datasets, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7308, https://doi.org/10.5194/egusphere-egu22-7308, 2022.

EGU22-7313 | Presentations | ITS2.5/NH10.8

The potential of automated snow avalanche detection from SAR images for the Austrian Alpine region using a learning-based approach 

Kathrin Lisa Kapper, Stefan Muckenhuber, Thomas Goelles, Andreas Trügler, Muhamed Kuric, Jakob Abermann, Jakob Grahn, Eirik Malnes, and Wolfgang Schöner

Each year, snow avalanches cause many casualties and tremendous damage to infrastructure. Prevention and mitigation mechanisms for avalanches are established for specific regions only. However, the full extent of the overall avalanche activity is usually barely known, as avalanches occur in remote areas, making in-situ observations scarce. To overcome these challenges, an automated avalanche detection approach using Copernicus Sentinel-1 synthetic aperture radar (SAR) data has recently been introduced for some test regions in Norway. This automated detection from SAR images is faster and gives more comprehensive results than field-based detection by avalanche experts. The Sentinel-1 programme has provided - and continues to provide - free-of-charge, weather-independent, and high-resolution satellite Earth observations since its start in 2014. Recent advances in avalanche detection use deep learning algorithms to improve detection rates. Consequently, the performance potential and the availability of reliable training data make learning-based approaches an appealing option for avalanche detection.

In the framework of the exploratory project SnowAV_AT, we intend to build the basis for a state-of-the-art automated avalanche detection system for the Austrian Alps, including a "best practice" data processing pipeline and a learning-based approach applied to Sentinel-1 SAR images. As a first step towards this goal, we have compiled several labelled training datasets of previously detected avalanches that can be used for learning. Concretely, these datasets contain 19,000 avalanches that occurred during a large event in Switzerland in January 2018, around 6,000 avalanches that occurred in Switzerland in January 2019, and around 800 avalanches that occurred in Greenland in April 2016. The avalanche detection performance of our learning-based approach will be quantitatively evaluated against held-out test sets. Furthermore, we will provide qualitative evaluations using SAR images of the Austrian Alps to gauge how well our approach generalizes to unseen data that is potentially distributed differently from the training data. In addition, selected ground truth data from Switzerland, Greenland and Austria will allow us to validate the accuracy of the detection approach. As a particular novelty of our work, we will try to leverage high-resolution weather data and combine it with SAR images to improve detection performance. Moreover, we will assess the possibilities of learning-based approaches in the context of the arguably more challenging avalanche forecasting problem.

How to cite: Kapper, K. L., Muckenhuber, S., Goelles, T., Trügler, A., Kuric, M., Abermann, J., Grahn, J., Malnes, E., and Schöner, W.: The potential of automated snow avalanche detection from SAR images for the Austrian Alpine region using a learning-based approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7313, https://doi.org/10.5194/egusphere-egu22-7313, 2022.

EGU22-7561 | Presentations | ITS2.5/NH10.8

Triangulation of remote sensing, social sensing, and geospatial sensing for flood mapping, damage estimation, and vulnerability assessment 

F. Ofli, Z. Akhtar, R. Sadiq, and M. Imran

Flood events cause substantial damage to infrastructure and disrupt livelihoods. There is a need for an innovative, open-access, real-time disaster map pipeline that is automatically initiated at the time of a flood event to highlight flooded regions, potential damage and vulnerable communities. This can help direct resources appropriately during and after a disaster to reduce disaster risk. To implement this pipeline, we explored the integration of three heterogeneous data sources (remote sensing, social sensing and geospatial sensing) to guide disaster relief and response. Remote sensing through satellite imagery is an effective method for identifying flooded areas, and we used existing deep learning models to develop a pipeline that processes both optical and radar imagery. Whilst this can offer situational awareness right after a disaster, satellite-based flood extent maps lack important contextual information about the severity of structural damage or the urgent needs of the affected population. This is where the potential of social sensing through microblogging sites comes into play, as it provides insights directly from eyewitnesses and affected people in real time. Whilst social sensing data is advantageous, these streams are usually extremely noisy, so disaster-relevant taxonomies need to be built for both text and images. To develop a disaster taxonomy for social media texts, we conducted a literature review to better understand stakeholder information needs. The final taxonomy consists of 30 categories organized among three high-level classes. This taxonomy was then used to label a large number of tweet texts (~10,000) to train machine learning classifiers so that only relevant social media texts are visualized on the disaster map.
Moreover, a disaster object taxonomy for social media images was developed in collaboration with a certified emergency manager and trained volunteers from the Montgomery County, MD Community Emergency Response Team. In total, 106 object categories were identified and organized as a hierarchical taxonomy with three high-level classes and 10 sub-classes. This taxonomy will be used to label a large set of disaster images for object detection so that machine learning classifiers can be trained to effectively detect disaster-relevant objects in social media imagery. The wide perspective provided by the satellite view, combined with the ground-level perspective from locally collected textual and visual information, helped us identify three types of signals: (i) confirmatory signals from both sources, which give greater confidence that a specific region is flooded; (ii) complementary signals that provide different contextual information, including needs and requests, disaster impact or damage reports, and situational information; and (iii) novel signals, when the data sources do not overlap and provide unique information. We plan to fuse the third component, geospatial sensing, to perform flood vulnerability analysis and allow easy identification of the areas/zones most vulnerable to flooding. Thus, the fusion of remote sensing, social sensing and geospatial sensing for rapid flood mapping can be a powerful tool for crisis responders.
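
The classifiers trained on the labeled tweets are not specified in the abstract. As a stand-in sketch, a minimal multinomial naive Bayes relevance filter over bag-of-words counts is shown below; all example texts, labels and thresholds are invented for illustration.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Multinomial naive Bayes with add-one smoothing over unigram counts."""
    classes = set(labels)
    vocab = {w for d in docs for w in d.split()}
    counts = {c: Counter() for c in classes}
    priors = {c: labels.count(c) / len(labels) for c in classes}
    for d, y in zip(docs, labels):
        counts[y].update(d.split())
    return vocab, priors, counts

def predict(text, vocab, priors, counts):
    """Return the class with the highest log posterior for the text."""
    best, best_lp = None, -math.inf
    for c, prior in priors.items():
        total = sum(counts[c].values())
        lp = math.log(prior)
        for w in text.split():
            if w in vocab:   # ignore out-of-vocabulary words
                lp += math.log((counts[c][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

docs = ["flood water rising in street", "road closed flood damage",
        "great concert last night", "new phone release today"]
labels = ["relevant", "relevant", "irrelevant", "irrelevant"]
model = train_nb(docs, labels)
print(predict("flood damage on my street", *model))  # → relevant
```

In practice the ~10,000 labeled tweets and the 30-category taxonomy would feed a far stronger model, but the filtering role is the same: only texts scored as relevant reach the disaster map.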

How to cite: Ofli, F., Akhtar, Z., Sadiq, R., and Imran, M.: Triangulation of remote sensing, social sensing, and geospatial sensing for flood mapping, damage estimation, and vulnerability assessment, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7561, https://doi.org/10.5194/egusphere-egu22-7561, 2022.

EGU22-7711 | Presentations | ITS2.5/NH10.8

Global sensitivity analyses to characterize the risk of earth fissures in subsiding basins 

Yueting Li, Claudia Zoccarato, Noemi Friedman, András Benczúr, and Pietro Teatini

Earth fissures associated with groundwater pumping are a severe geohazard jeopardizing several subsiding basins, generally in arid countries (e.g., Mexico, Arizona, Iran, China, Pakistan). Fissures up to 15 km long, 1-2 m wide, 15-20 m deep, and with more than 2 m of vertical dislocation have been reported. A common geological condition favoring the occurrence of earth fissures is the presence of a shallow bedrock ridge buried by compacting sedimentary deposits. This study aims to improve the understanding of this mechanism by evaluating the effects of various factors on the risk of fissure formation and development. Several parameters playing a role in fissure occurrence have been considered, such as the shape of the bedrock ridge, the aquifer thickness, the pressure depletion in the aquifer system, and its compressibility. A realistic case is developed in which the characteristics of the fissure, such as displacements and stresses, are quantified with the aid of a numerical approach based on finite elements for the continuum and interface elements for the discretization of the fissures. Modelling results show that the presence of the bedrock ridge causes tension to accumulate around its tip and results in fissures opening from the land surface downward after long-term piezometric depletion. Different global sensitivity analysis methods are applied to measure the importance of each single factor (or group of factors) on the quantity of interest, i.e., the fissure opening. A conventional variance-based method is first presented, with Sobol indices computed from Monte Carlo simulations, although its accuracy is only guaranteed with a high number of forward simulations. As alternatives, generalized polynomial chaos expansion and gradient boosting trees are introduced to approximate the forward model and perform the corresponding sensitivity assessment at a significantly reduced computational cost. All the measures provide similar results that highlight the importance of the bedrock ridge in earth fissuring.
Generally, the steeper the bedrock ridge, the higher the risk of significant fissure opening. Pore pressure depletion is a secondary key factor, essential for fissure formation.
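The variance-based step above can be illustrated with a minimal Monte Carlo estimator of first-order Sobol indices (pick-and-freeze, Saltelli-style). The toy three-factor model below is a hypothetical stand-in for the finite-element fissure simulator; the factor names are illustrative only.

```python
# Hedged sketch: first-order Sobol indices via Monte Carlo sampling.
import numpy as np

def model(x):
    # Hypothetical surrogate: "fissure opening" as a nonlinear function of
    # (ridge steepness, pressure depletion, compressibility) in [0, 1].
    return 4.0 * x[:, 0] ** 2 + 2.0 * x[:, 1] + 0.5 * x[:, 2]

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.random((n, d))          # two independent sample matrices
B = rng.random((n, d))

fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S1 = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # swap only the i-th factor
    fABi = model(ABi)
    S1.append(np.mean(fB * (fABi - fA)) / var)   # Saltelli (2010) estimator

print(np.round(S1, 2))   # the first factor dominates the output variance
```

As the abstract notes, such brute-force estimates need many forward runs; a polynomial chaos or gradient boosting surrogate would replace `model` to cut the cost.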

How to cite: Li, Y., Zoccarato, C., Friedman, N., Benczúr, A., and Teatini, P.: Global sensitivity analyses to characterize the risk of earth fissures in subsiding basins, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7711, https://doi.org/10.5194/egusphere-egu22-7711, 2022.

Induced subsidence and seismicity caused by the production of hydrocarbons in the Groningen gas field (the Netherlands) are a widely known issue facing this naturally aseismic region (Smith et al., 2019). Extraction reduces pore-fluid pressure, leading to the accumulation of small elastic and inelastic strains and an increase in effective vertical stress driving compaction of the reservoir sandstones.

Recent studies (Pijnenburg et al., 2019a, b and Verberne et al., 2021) identify grain-scale deformation of intergranular and grain-coating clays as largely responsible for accommodating (permanent) inelastic deformation at the small strains relevant to production (≤1.0%). However, their distribution, microstructure, abundance, and contribution to inelastic deformation remain unconstrained, presenting challenges when evaluating grain-scale deformation mechanisms within a natural system. Traditional methods of mineral identification are costly, labor-intensive, and time-consuming. Digital imaging coupled with machine-learning-driven segmentation is necessary to accelerate the identification of clay microstructures and distributions within reservoir sandstones for subsequent large-scale analysis and geomechanical modeling.

We performed digital imaging on thin-sections taken from core recovered from the highly-depleted Zeerijp ZRP-3a well located at the most seismogenic part of the field. The core was kindly made available by the field operator, NAM. Optical digital images were acquired using the Zeiss AxioScan optical light microscope at 10x magnification with a resolution of 0.44µm and compared to backscattered electron (BSE) digital images from the Zeiss EVO 15 Scanning Electron Microscope (SEM) at varying magnifications with resolutions ranging from 0.09µm - 2.24 µm. Digital images were processed in ilastik, an interactive machine-learning-based toolkit for image segmentation that uses a Random Forest classifier to separate clays from a digital image (Berg et al., 2019).

Comparisons between segmented optical and BSE digital images indicate that image resolution is the main limiting factor for successful mineral identification and image segmentation, especially for clay minerals. Lower-resolution digital images obtained using optical light microscopy may be sufficient to segment larger intergranular/pore-filling clays, but higher-resolution BSE images are necessary to segment smaller micron- to submicron-sized grain-coating clays. Comparing the same segmented optical image (~11.5% clay) with the corresponding BSE image (~16.3% clay) reveals an error of ~30%, illustrating the potential to underestimate the clay content needed for geomechanical modeling.

Our analysis shows that coupling automated electron microscopy with machine-learning-driven image segmentation has the potential to provide statistically relevant and robust information to further constrain the role of clay films in the compaction behavior of reservoir rocks.
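The ilastik-style workflow described above can be sketched as Random-Forest pixel classification: per-pixel intensity features at several smoothing scales, trained on a handful of annotated pixels. The synthetic "BSE image" and the scikit-learn classifier below are illustrative assumptions, not the actual ilastik feature set or the Zeerijp data.

```python
# Hedged sketch of Random-Forest pixel segmentation in the spirit of ilastik.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.05, (64, 64))   # quartz-like background intensity
img[20:40, 20:40] += 0.5                # brighter synthetic "clay" patch

# Feature stack: raw intensity plus two Gaussian smoothing scales per pixel.
feats = np.stack([img, gaussian_filter(img, 1), gaussian_filter(img, 3)], axis=-1)
X = feats.reshape(-1, 3)

# Sparse annotations, as a user would paint in ilastik (1 = clay, 0 = matrix).
labelled = {(30, 30): 1, (25, 35): 1, (5, 5): 0, (60, 10): 0, (10, 55): 0}
idx = [r * 64 + c for r, c in labelled]
y = list(labelled.values())

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[idx], y)
mask = rf.predict(X).reshape(64, 64)

clay_fraction = mask.mean()
print(f"segmented clay fraction: {clay_fraction:.1%}")
```

The segmented clay fraction is exactly the quantity whose optical-vs-BSE discrepancy (~11.5% vs ~16.3%) the abstract uses to argue for higher-resolution imagery.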

 

References:

Berg, S. et al., Nat Methods 16, 1226–1232 (2019).

(NAM) Nederlandse Aardolie Maatschappij BV (2015).

Pijnenburg, R. P. J. et al., Journal of Geophysical Research: Solid Earth, 124 (2019a).

Pijnenburg, R. P. J. et al., Journal of Geophysical Research: Solid Earth, 124, 5254–5282. (2019b)

Smith, J. D. et al., Journal of Geophysical Research: Solid Earth, 124, 6165–6178. (2019)

Verberne, B. A. et al., Geology, 49 (5): 483–487. (2021)

How to cite: Vogel, H., Amiri, H., Plümper, O., Hangx, S., and Drury, M.: Applications of digital imaging coupled with machine-learning for aiding the identification, analysis, and quantification of intergranular and grain-coating clays within reservoirs rocks., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7915, https://doi.org/10.5194/egusphere-egu22-7915, 2022.

EGU22-9406 | Presentations | ITS2.5/NH10.8

Building exposure datasets using street-level imagery and deep learning object detection models 

Luigi Cesarini, Rui Figueiredo, Xavier Romão, and Mario Martina

The built environment is constantly under the threat of natural hazards, and climate change will only exacerbate such perils. The assessment of natural hazard risk requires exposure models representing the characteristics of the assets at risk, which are crucial to subsequently estimate the damage and impacts of a given hazard to such assets. Studies addressing exposure assessment are expanding, in particular due to technological progress. In fact, several works are introducing data collected from volunteered geographic information (VGI), user-generated content, and remote sensing. Although these methods generate large amounts of data, they typically require a time-consuming extraction of the necessary information. Deep learning models are particularly well suited to perform this labour-intensive task due to their ability to handle massive amounts of data.

In this context, this work proposes a methodology that connects VGI obtained from OpenStreetMap (OSM), street-level imagery from Google Street View (GSV) and deep learning object detection models to create an exposure dataset of electrical transmission towers, an asset particularly vulnerable to strong winds among other perils (e.g., ice loads and earthquakes). The main objective of the study is to establish and demonstrate a complete pipeline that first obtains the locations of transmission towers from the power grid layer of OSM’s world infrastructure, and subsequently assigns relevant features to each tower based on the classification returned by an object detection model applied to street-level imagery of the tower, obtained from GSV.

The study area for the initial application of the methodology is the Porto district (Portugal), which has an area of around 1360 km² and contains 5789 transmission towers. The area was found to be representative given its diverse land use, containing both densely populated settlements and rural areas, and the different types of towers that can be found. A single-stage detector (YOLOv5) and a two-stage detector (Detectron2) were trained and used to perform identification and classification of towers. The first task tested the ability of a model to recognize whether a tower is present in an image, while the second task assigned a category to each tower based on a taxonomy derived from a compilation of the most common types of towers. Preliminary results on the test partition of the dataset are promising. For the identification task, YOLOv5 returned a mean average precision (mAP) of 87% for an intersection over union (IoU) of 50%, while Detectron2 reached a mAP of 91% for the same IoU. In the classification problem, performance was also satisfactory, particularly when the models were trained on a sufficient number of images per class.

Additional analyses of the results can provide insights into the types of areas for which the methodology is more reliable. For example, in remote areas, the long distance of a tower to the street might prevent the object from being identified in the image. Nevertheless, the proposed methodology can in principle be used to generate exposure models of transmission towers at large spatial scales in areas for which the necessary datasets are available.
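The mAP@50 figures above rest on the IoU criterion: a predicted box counts as a true positive only when its intersection-over-union with a ground-truth box reaches 0.5. A minimal sketch, with hypothetical box coordinates in (x_min, y_min, x_max, y_max) form:

```python
# Hedged sketch of the IoU matching rule behind mAP at IoU = 50%.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt = (10, 10, 50, 90)       # hypothetical ground-truth tower box
pred = (15, 12, 52, 88)     # hypothetical detector output

score = iou(gt, pred)
print(f"IoU = {score:.2f}, true positive at IoU >= 0.5: {score >= 0.5}")
```

Precision-recall curves are then built from these TP/FP decisions per class and averaged into the mAP reported for YOLOv5 and Detectron2.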

 

How to cite: Cesarini, L., Figueiredo, R., Romão, X., and Martina, M.: Building exposure datasets using street-level imagery and deep learning object detection models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9406, https://doi.org/10.5194/egusphere-egu22-9406, 2022.

EGU22-10276 | Presentations | ITS2.5/NH10.8

Weather and climate in the AI-supported early warning system DAKI-FWS 

Elena Xoplaki, Andrea Toreti, Florian Ellsäßer, Muralidhar Adakudlu, Eva Hartmann, Niklas Luther, Johannes Damster, Kim Giebenhain, Andrej Ceglar, and Jackie Ma

The project DAKI-FWS (BMWi joint-project “Data and AI-supported early warning system to stabilise the German economy”; German: “Daten- und KI-gestütztes Frühwarnsystem zur Stabilisierung der deutschen Wirtschaft”) develops an early warning system (EWS) to strengthen economic resilience in Germany. The EWS enables better characterization of the development and course of pandemics or hazardous climate extreme events and can thus protect and support lives, jobs, land and infrastructures.

The weather and climate modules of the DAKI-FWS use state-of-the-art seasonal forecasts for Germany and apply innovative AI approaches to produce simulations at very high spatial resolution. These are used for the climate-related practical applications of the project, such as pandemics or subtropical/tropical diseases, and contribute to the estimation of the outbreak and evolution of health crises. Further, the weather modules of the EWS objectively identify weather and climate extremes, such as heat waves, storms and droughts, as well as compound extremes, from a large pool of key data sets. The innovative project work is complemented by the development and AI enhancement of the European Flood Awareness System model, LISFLOOD, and of a forecasting system for Germany at very high spatial resolution. The model, combined with the high-end output of the seasonal forecast, enables high-resolution, accurate flood risk assessment. The final output of the EWS and hazard maps not only support adaptation, but also increase preparedness by providing a time horizon of several months ahead, thus increasing the resilience of economic sectors to the impacts of ongoing anthropogenic climate change. The weather and climate modules of the EWS provide economic, political, and administrative decision-makers and the general public with evidence on the probability of occurrence, intensity and spatial and temporal extent of extreme events, as well as with critical information during a disaster.

How to cite: Xoplaki, E., Toreti, A., Ellsäßer, F., Adakudlu, M., Hartmann, E., Luther, N., Damster, J., Giebenhain, K., Ceglar, A., and Ma, J.: Weather and climate in the AI-supported early warning system DAKI-FWS, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10276, https://doi.org/10.5194/egusphere-egu22-10276, 2022.

Landslide inventories are essential for landslide susceptibility mapping, hazard modelling, and further risk mitigation management. For decades, experts and organisations worldwide have preferred manual visual interpretation of satellite and aerial images. However, there are various problems associated with manual inventories: the manual extraction of landslide borders and their representation with polygons is a subjective process. Manual delineation is affected by the applied methodology, the preferences of the experts and interpreters, and how much time and effort are invested in the inventory generation process. In recent years, a vast amount of research on semi-automated and automatic mapping of landslide inventories has been carried out to overcome these issues. The automatic generation of landslide inventories using Artificial Intelligence (AI) techniques is still in its early phase, as no published research has yet produced a ground-truth representation of the landslide situation after a triggering event. The evaluation metrics in the recent literature show a range of 50-80% F1-score for landslide boundary delineation using AI-based models. Very few studies claim to have achieved more than an 80% F1-score, except those that evaluate their models in the same study area used for training. Therefore, there is still a research gap between the generation of AI-based landslide inventories and their usability for landslide hazard and risk studies. In this study, we explore several inventories developed by AI and by manual delineation and test their usability for assessing landslide hazard.

How to cite: Meena, S. R., Floris, M., and Catani, F.: Can landslide inventories developed by artificial intelligence substitute manually delineated inventories for landslide hazard and risk studies?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11422, https://doi.org/10.5194/egusphere-egu22-11422, 2022.

EGU22-11787 | Presentations | ITS2.5/NH10.8

Explainable deep learning for wildfire danger estimation 

Michele Ronco, Ioannis Prapas, Spyros Kondylatos, Ioannis Papoutsis, Gustau Camps-Valls, Miguel-Ángel Fernández-Torres, Maria Piles Guillem, and Nuno Carvalhais

Deep learning models have been remarkably successful in a number of different fields, yet their application to disaster management is obstructed by the lack of transparency and trust that characterises artificial neural networks. This is particularly relevant in the Earth sciences, where fitting is only a tiny part of the problem and process understanding becomes more relevant [1,2]. In this regard, plenty of eXplainable Artificial Intelligence (XAI) algorithms have been proposed in the literature over the past few years [3]. We suggest that combining saliency maps with interpretable approximations, such as LIME, is useful to extract complementary insights and reach robust explanations. We address the problem of wildfire forecasting, for which interpreting the model's predictions is of crucial importance to put effective mitigation strategies into action. Daily risk maps have been obtained by training a convolutional LSTM with ten years of spatio-temporal features, including weather variables, remote sensing indices and static layers for land characteristics [4]. We show how the usage of XAI allows us to interpret the predicted fire danger, thereby narrowing the gap between black-box approaches and disaster management.
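The saliency-map idea above can be illustrated with occlusion sensitivity, a simple model-agnostic relative of gradient saliency: perturb parts of the input and record how much the prediction changes. The toy "danger model" below is an illustrative stand-in, not the convolutional LSTM of [4].

```python
# Hedged sketch of occlusion-based saliency for a black-box predictor.
import numpy as np

def danger_model(x):
    # Hypothetical stand-in: fire danger driven only by the centre pixels.
    return x[3:5, 3:5].mean()

x = np.full((8, 8), 0.5)          # toy input "map" of a driver variable
baseline = danger_model(x)

saliency = np.zeros_like(x)
for i in range(8):
    for j in range(8):
        x_occ = x.copy()
        x_occ[i, j] = 0.0                         # occlude one pixel
        saliency[i, j] = baseline - danger_model(x_occ)

# The saliency map is non-zero only where the model actually looks.
print(np.argwhere(saliency > 0))
```

Interpretable surrogates such as LIME follow the same perturb-and-observe logic but additionally fit a local linear model to the perturbed predictions.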

 

[1] Camps-Valls, G., Tuia, D., Zhu, X. X., and Reichstein, M. (Eds.): Deep Learning for the Earth Sciences: A Comprehensive Approach to Remote Sensing, Climate Science and Geosciences, Wiley & Sons, 2021.

[2] Reichstein, M., Camps-Valls, G., Stevens, B., Denzler, J., Carvalhais, N., Jung, M., and Prabhat: Deep learning and process understanding for data-driven Earth system science, Nature, 566, 195–204, 2019.

[3] Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., and Müller, K.-R. (Eds.): Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, LNCS, volume 11700, Springer.

[4] Prapas, I., Kondylatos, S., Papoutsis, I., Camps-Valls, G., Ronco, M., Fernández-Torres, M.-Á., Piles Guillem, M., and Carvalhais, N.: Deep Learning Methods for Daily Wildfire Danger Forecasting, arXiv:2111.02736.

How to cite: Ronco, M., Prapas, I., Kondylatos, S., Papoutsis, I., Camps-Valls, G., Fernández-Torres, M.-Á., Piles Guillem, M., and Carvalhais, N.: Explainable deep learning for wildfire danger estimation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11787, https://doi.org/10.5194/egusphere-egu22-11787, 2022.

EGU22-11872 | Presentations | ITS2.5/NH10.8

Recent Advances in Deep Learning for Spatio-Temporal Drought Monitoring, Forecasting and Model Understanding 

María González-Calabuig, Jordi Cortés-Andrés, Miguel-Ángel Fernández-Torres, and Gustau Camps-Valls

Droughts constitute one of the costliest natural hazards and have seriously destructive effects on the ecological environment, agricultural production and socio-economic conditions. Their elusive and subjective definition, owing to the complex physical, chemical and biological processes of the Earth system they involve, makes their management an arduous challenge for researchers as well as decision- and policy-makers. We present here our most recent advances in machine learning models along three complementary lines of research on droughts: monitoring, forecasting and understanding. While monitoring, or detection, is about obtaining time series of drought maps and discovering underlying patterns and correlations, forecasting, or prediction, aims to anticipate future droughts. Last but not least, understanding, i.e. explaining models by means of expert-comprehensible representations, is as important as accurately addressing these tasks, especially for deployment in real scenarios. Thanks to the emergence and success of deep learning, all of these tasks can be tackled with spatio-temporal data-driven approaches built on climate variables (soil moisture, precipitation, temperature, vegetation health, etc.) and/or satellite imagery. The possibilities are endless, from the design of convolutional architectures and attention mechanisms to the use of generative models such as Normalizing Flows (NFs) or Generative Adversarial Networks (GANs), trained in both supervised and unsupervised manners, among others. Different application examples in Europe from 2003 onwards are provided, with the aim of reflecting on the possibilities of the proposed strategies, and also of foreseeing alternatives and future lines of development.
For that purpose, we make use of several variables at mesoscale (1 km) spatial and 8-day temporal resolution included in the Earth System Data Cube (ESDC) [Mahecha et al., 2020] for drought detection, while high-resolution (20 m, 5-day) Sentinel-2 data cubes, extracted from the extreme summer track of EarthNet2021 [Requena-Mesa et al., 2021], are considered for forecasting.

 

References

Mahecha, M. D., Gans, F., Brandt, G., Christiansen, R., Cornell, S. E., Fomferra, N., ... & Reichstein, M. (2020). Earth system data cubes unravel global multivariate dynamics. Earth System Dynamics, 11(1), 201-234.

Requena-Mesa, C., Benson, V., Reichstein, M., Runge, J., & Denzler, J. (2021). EarthNet2021: A large-scale dataset and challenge for Earth surface forecasting as a guided video prediction task. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1132-1142).

How to cite: González-Calabuig, M., Cortés-Andrés, J., Fernández-Torres, M.-Á., and Camps-Valls, G.: Recent Advances in Deep Learning for Spatio-Temporal Drought Monitoring, Forecasting and Model Understanding, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11872, https://doi.org/10.5194/egusphere-egu22-11872, 2022.

EGU22-12432 | Presentations | ITS2.5/NH10.8

Building wildfire intelligence at the edge: bridging the gap from development to deployment 

Maria João Sousa, Alexandra Moutinho, and Miguel Almeida

The increased frequency, intensity, and severity of wildfire events in several regions across the world have highlighted several hindrances in disaster response infrastructure that call for enhanced intelligence-gathering pipelines. In this context, interest in the use of unmanned aerial vehicles for surveillance and active fire monitoring has been growing in recent years. However, several roadblocks challenge the implementation of these solutions due to their high autonomy requirements and energy-constrained nature. For these reasons, the focus of artificial intelligence development on large models hampers the development of models suitable for deployment onboard these platforms. In that sense, while artificial intelligence approaches can be an enabling technology that effectively scales real-time monitoring services and optimizes emergency response resources, the design of these systems imposes: (i) data requirements, (ii) computing constraints and (iii) communications limitations. Here, we propose a decentralized approach, reflecting upon these three vectors.

Data-driven artificial intelligence is central to both handle multimodal sensor data in real-time and to annotate large amounts of data collected, which are necessary to build robust safety-critical monitoring systems. Nevertheless, these two objectives have distinct implications computation-wise, because the first must happen on-board, whereas the second can leverage higher processing capabilities off-board. While autonomy of robotic platforms drives mission performance, being a key reason for the need for edge computing of onboard sensor data, the communications design is essential to mission endurance as relaying large amounts of data in real-time is unfeasible energy-wise. 

For these reasons, real-time processing and data annotation must be tackled in a complementary manner, instead of the general practice of only targeting overall accuracy improvement. To build wildfire intelligence at the edge, we propose developments along two tracks: (i) data annotation and (ii) on-the-edge deployment. The need for considerable effort along these two avenues stems from their very distinct development requirements and performance evaluation metrics. On the one hand, improving data annotation capacity is essential to build high-quality databases that can provide better sources for machine learning. On the other hand, for on-the-edge deployment, architectures need to balance robustness and architectural parsimony in order to be efficient for edge processing. Whereas the first objective is driven foremost by accuracy, the second must emphasize timeliness.

Acknowledgments
This work was supported by FCT – Fundação para a Ciência e a Tecnologia, I.P., through IDMEC, under project Eye in the Sky, PCIF/SSI/0103/2018, and through IDMEC, under LAETA, project UIDB/50022/2020. M. J. Sousa acknowledges the support from FCT, through the Ph.D. Scholarship SFRH/BD/145559/2019, co-funded by the European Social Fund (ESF).

How to cite: Sousa, M. J., Moutinho, A., and Almeida, M.: Building wildfire intelligence at the edge: bridging the gap from development to deployment, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12432, https://doi.org/10.5194/egusphere-egu22-12432, 2022.

This research work focuses on the detection of possible pre-seismic perturbations related to medium-sized earthquakes (5≤Mw≤5.9) occurring in the upper ionized atmosphere (about 350 km above the Earth, the ionospheric F2-region). For this specific purpose, we have exploited several geodetic datasets, derived through signal processing of dual-frequency permanent ground-based Global Positioning System (GPS)/Global Navigation Satellite Systems (GNSS) receivers located in the Euro-Mediterranean basin.

To find out whether the ionospheric F2-layer is responsive to the energy released during the preparation periods of medium-magnitude earthquakes, the Lorca seismic event (May 11th, 2011, Mw 5.1, Murcia region) was taken as an initial sample. For this shallow-focus earthquake (4 km depth), the longitude-latitude coordinates of the epicenter are 1.7114° W, 37.7175° N. Modeling the regional ionosphere from GPS/GNSS total electron content (TEC) measurements over the epicentral area through spherical harmonic analysis allowed us to identify pre-earthquake ionospheric irregularities in response to the M5.1 Lorca event. After discriminating the seismo-ionospheric precursors from disturbances caused by space weather effects, via wavelet-based spectral analysis, these irregularities were identified about a week before the onset of the mainshock.

The seismo-geodetic technique adopted in this study supports our hypothesis of a strong correlation between deep lithospheric deformations and pre-seismic ionospheric anomalies associated with moderate-magnitude events.

Keywords: Murcia earthquake, Seismo-ionospheric precursors, Spherical harmonic analysis, Wavelet transform, GPS/GNSS-TEC, Lithospheric deformations, Regional F2-ionosphere maps.

How to cite: Tachema, A.: Could the moderate-sized earthquakes trigger pre-seismic ionospheric irregularities? Study of the 2011 Murcia earthquake in the Mediterranean region (SE-Spain)., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1438, https://doi.org/10.5194/egusphere-egu22-1438, 2022.

EGU22-1505 | Presentations | NH4.1 | Highlight

Testing spatial aftershock forecasts accounting for large secondary events during on going earthquake sequences: A case study of the 2017-2019 Kermanshah sequence 

Behnam Maleki Asayesh, Hamid Zafarani, Sebastian Hainzl, and Shubham Sharma

Large earthquakes are always followed by aftershock sequences that last for months to years. Sometimes these aftershocks are as destructive as the mainshock. Hence, accurate and immediate prediction of the spatial and temporal distribution of aftershocks is of great importance for planning search and rescue activities. Despite the large uncertainties associated with the calculation of Coulomb failure stress changes (ΔCFS), it is the most commonly used method for predicting the spatial distribution of aftershocks. Recent studies showed that classical Coulomb failure stress maps are outperformed by alternative scalar stress quantities, as well as by a distance-slip probabilistic model (R) and deep neural networks (DNN). However, these test results were based on the receiver operating characteristic (ROC) metric, which is not well suited for imbalanced data sets such as aftershock distributions. Furthermore, the previous analyses ignored the potential impact of large secondary earthquakes.

In order to examine the effects of large events on the spatial forecasting of aftershocks during a sequence, we use the 2017-2019 seismic sequence in western Iran. This sequence started with the M7.3 Azgeleh mainshock (12 November 2017) and was followed by the M5.9 Tazehabad (August 2018) and M6.3 Sarpol-e Zahab (November 2018) events. Furthermore, 15 aftershocks with magnitude > 5.0 and more than 8000 aftershocks with magnitude > 1 were recorded by the Iranian Seismological Center (IRSC) during this sequence (12.11.2017-04.07.2019). For this complex sequence, we applied the classical Coulomb failure stress, the alternative stress scalars, and the R forecast model, and used the more appropriate MCC-F1 metric to test prediction accuracy. We observe that the receiver-independent stress scalars (maximum shear and von Mises stress) perform better than the classical ΔCFS values relying on the specification of receiver mechanisms (ΔCFS resolved on the master fault, on optimally oriented planes, and with variable mechanisms). However, detailed analysis based on the MCC-F1 metric revealed that the performance depends on the grid size, magnitude cutoff, and test period. Increasing the magnitude cutoff and decreasing the grid size and test period reduce the performance of all methods. Finally, we found that the performance of all methods except ΔCFS resolved on the master fault and on optimally oriented planes improves when the source information of large aftershocks is additionally considered, with stress-based models outperforming the R model. Our results highlight the importance of accounting for secondary stress changes in improving earthquake forecasts.
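The metrics discussed above can be made concrete with a small worked example: the Matthews correlation coefficient (MCC) and F1 computed from a binary forecast grid. The confusion counts below are made-up illustrative numbers for an imbalanced aftershock map (few "aftershock" cells among many quiet ones), not the Kermanshah results.

```python
# Hedged sketch: why imbalance-aware metrics (MCC, F1) are preferred over
# accuracy-style scores for sparse aftershock maps.
import math

tp, fp, fn, tn = 40, 60, 10, 9890   # hypothetical grid-cell counts

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)

accuracy = (tp + tn) / (tp + fp + fn + tn)
# Accuracy exceeds 99% despite poor precision, which is why the MCC-F1
# combination is used to rank the forecast models.
print(f"F1 = {f1:.2f}, MCC = {mcc:.2f}, accuracy = {accuracy:.3f}")
```

In an MCC-F1 analysis, both quantities are swept over decision thresholds and summarized jointly, avoiding the optimism of ROC curves on such skewed grids.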

How to cite: Maleki Asayesh, B., Zafarani, H., Hainzl, S., and Sharma, S.: Testing spatial aftershock forecasts accounting for large secondary events during on going earthquake sequences: A case study of the 2017-2019 Kermanshah sequence, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1505, https://doi.org/10.5194/egusphere-egu22-1505, 2022.

EGU22-2152 | Presentations | NH4.1

Extension of the radon monitoring network in seismic areas in Romania 

Victorin - Emilian Toader, Constantin Ionescu, Iren-Adelina Moldovan, Alexandru Marmureanu, Iosif Lingvay, and Ovidiu Ciogescu

The Romanian National Institute of Earth Physics (NIEP) developed a radon monitoring network mainly for the Vrancea seismic area, characterized by deep earthquakes (a rectangular zone in longitude/latitude from 25.050/46.210 to 27.950/44.690, 60-250 km depth). A few stations were relocated after a year of operation following inconclusive results regarding the relationship between radon and seismic activity. To the 5 stations in the Vrancea area (Bisoca, Nehoiu, Plostina, Sahastru and Lopatari) we added others positioned in areas with surface seismicity (Panciu, Râmnicu Vâlcea, Surlari and Mangalia). The last two are on the Intramoesica fault, which will be monitored in the future along with the Fagaras-Câmpulung fault. Radon, together with CO2 and CO, is monitored at Râmnicu Vâlcea within the SPEIGN project near a 40 m deep borehole in which three-component acceleration, temperature and humidity are recorded. The same project funded the monitoring of radon, CO2 and CO in Mangalia, which is close to the Shabla seismic zone. The last significant earthquake in the Panciu area, with ML = 5.7, occurred on 22.11.2014. The area is seismically active, which justified the installation of a radon detector next to a radio receiver in the ULF band within the AFROS project. Within the same project, radon monitoring is performed at Surlari, following the activity of the Intramoesica fault. In this location we also measure CO2, CO, air temperature and humidity. The first results show normal radon activity in Panciu. The measurements at Surlari have higher values than those in Panciu, possibly due to the forest where the sensors are located. A special case is Mangalia, where the data indicate local pollution rather than the effects of tectonic activity: radon, CO2 and CO values vary widely beyond normal limits. The source of these anomalies may be the local drinking water treatment plant or the nearby shipyard. We also recorded abnormal infrasound values, which are monitored in the same location. Determining the source of these anomalies requires at least one more monitoring point.

The purpose of expanding radon monitoring is to analyze the possibility of implementing a seismic event forecast. This can only be done through a multidisciplinary approach. For this reason, in addition to radon, determinations of CO2, CO, air ionization, magnetic field, tilt, telluric currents, solar radiation, VLF–ULF radio waves, borehole temperature, infrasound and acoustics are made.

This research helps organizations specializing in emergency response not only with short-term earthquake forecasting but also with information on pollution and on the effects of climate change, which have become increasingly evident lately. The methods and solutions are general and can be applied anywhere by customizing them to the specifics of the monitored area.

The main conclusion is that only a multidisciplinary approach allows the correlation of events and ensures a reliable forecast.

How to cite: Toader, V.-E., Ionescu, C., Moldovan, I.-A., Marmureanu, A., Lingvay, I., and Ciogescu, O.: Extension of the radon monitoring network in seismic areas in Romania, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2152, https://doi.org/10.5194/egusphere-egu22-2152, 2022.

EGU22-2979 | Presentations | NH4.1

TEC variation over Europe during the intense tectonic activity in the area of  Arkalochori of Crete on December of 2021 

Michael E. Contadakis, Demeter N. Arabelos, Christos Pikridas, Styllianos Bitharis, and Emmanuel M. Scordilis

This paper is one of a series dealing with the investigation of lower-ionospheric variations on the occasion of intense tectonic activity. In the present paper, we investigate the TEC variations over Europe during the intense seismic activity in Arkalochori, Crete, in December 2021. The Total Electron Content (TEC) data are provided by the Hermes GNSS Network managed by GNSS_QC, AUTH Greece, the HxGN/SmartNet-Greece network of Metrica S.A., and the EUREF Network. These data were analysed using Discrete Fourier Analysis in order to investigate the TEC turbulence band content. The results of this investigation indicate that the high-frequency limit f0 of the ionospheric turbulence content increases as the occurrence time of the earthquake approaches, pointing to the earthquake epicenter, in accordance with our previous investigations. We conclude that a Lithosphere Atmosphere Ionosphere Coupling (LAIC) mechanism through acoustic or gravity waves could explain this phenomenology.
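
The spectral step described above can be sketched in a few lines of Python. This is an illustrative fragment, not the authors' code: it detrends a TEC series, computes the one-sided DFT amplitude spectrum, and reads off the highest frequency still above a chosen noise level as a stand-in for the turbulence limit f0 (the function names and the threshold are assumptions).

```python
import numpy as np

def tec_spectrum(tec, dt):
    """One-sided DFT amplitude spectrum of a TEC series sampled every dt seconds."""
    tec = np.asarray(tec, dtype=float)
    tec = tec - tec.mean()                      # remove the mean before the DFT
    amp = np.abs(np.fft.rfft(tec))              # one-sided amplitude spectrum
    freq = np.fft.rfftfreq(tec.size, d=dt)      # matching frequency axis in Hz
    return freq, amp

def high_freq_limit(freq, amp, noise_level):
    """Highest frequency whose spectral amplitude still exceeds a noise threshold."""
    above = freq[amp > noise_level]
    return above.max() if above.size else 0.0
```

Tracking `high_freq_limit` day by day along the sequence would then show whether f0 indeed rises toward the mainshock, as reported above.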


Keywords: Seismicity, Lower Ionosphere, Ionospheric Turbulence, Brownian Walk, Aegean area.

How to cite: Contadakis, M. E., Arabelos, D. N., Pikridas, C., Bitharis, S., and Scordilis, E. M.: TEC variation over Europe during the intense tectonic activity in the area of  Arkalochori of Crete on December of 2021, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2979, https://doi.org/10.5194/egusphere-egu22-2979, 2022.

EGU22-3138 | Presentations | NH4.1

The Jun 15, 2019, M7.2 Kermadec Islands (New Zealand) earthquake as analyzed from ground to space 

Angelo De Santis, Loredana Perrone, Saioa A. Campuzano, Gianfranco Cianchini, Serena D'Arcangelo, Domenico Di Mauro, Dedalo Marchetti, Adriano Nardi, Martina Orlando, Alessandro Piscini, Dario Sabbagh, and Maurizio Soldani

The M7.2 Kermadec Islands (New Zealand) large earthquake occurred on June 15, 2019 as the result of shallow reverse faulting within the Tonga-Kermadec subduction zone. This work deals with the study of the earthquake-related processes that occurred during the preparation phase of this earthquake. We focused our analyses on seismic (earthquake catalogues), atmospheric (climatological archives) and ionospheric data (from ground to space, mainly satellite) in order to disclose the possible Lithosphere-Atmosphere-Ionosphere Coupling (LAIC). Regarding the ionospheric investigations, we analysed and compared the observations from the Global Navigation Satellite System (GNSS) receiver network with those from satellites in space. Specifically, the data from the European Space Agency (ESA) Swarm satellite constellation and from the China National Space Administration (CNSA, in partnership with the Italian Space Agency, ASI) China Seismo-Electromagnetic Satellite (CSES-01) are used in this study. An interesting comparison is made with a subsequent earthquake of comparable magnitude (M7.1) that occurred in Ridgecrest, California (USA) on July 6 of the same year. Both earthquakes showed several multiparametric anomalies occurring at almost the same times before each earthquake, evidencing a chain of processes that points to the moment of the corresponding mainshock. In both cases, it is demonstrated that a multiparametric and multilayer analysis is fundamental to better understand LAIC in complex phenomena such as earthquakes.

How to cite: De Santis, A., Perrone, L., Campuzano, S. A., Cianchini, G., D'Arcangelo, S., Di Mauro, D., Marchetti, D., Nardi, A., Orlando, M., Piscini, A., Sabbagh, D., and Soldani, M.: The Jun 15, 2019, M7.2 Kermadec Islands (New Zealand) earthquake as analyzed from ground to space, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3138, https://doi.org/10.5194/egusphere-egu22-3138, 2022.

EGU22-3194 | Presentations | NH4.1

Using Operational Earthquake Forecasting Tool for Decision Making: A Synthetic Case Study 

Chen Huang, Håkan Bolin, Vetle Refsum, and Abdelghani Meslem

Operational earthquake forecasting (OEF) provides timely information about time-dependent earthquake probabilities, which facilitates resilience-oriented decision-making. This study utilized the tools developed within the TURNkey (Towards more Earthquake-Resilient Urban Societies through a Multi-Sensor-Based Information System enabling Earthquake Forecasting, Early Warning and Rapid Response Actions) project, funded by the European Union's Horizon 2020 research and innovation programme, to demonstrate the benefits of OEF to a decision support system. The considered tools are built on state-of-the-art knowledge in seismology and earthquake engineering, involving the Bayesian spatiotemporal epidemic-type aftershock sequence (ETAS) forecasting model, time-dependent probabilistic seismic hazard assessment, the SELENA (SEismic Loss EstimatioN using a logic tree Approach) risk analysis, cost-benefit analysis and a multi-criteria decision-making methodology. Moreover, the tools are connected to the dense seismograph network also developed within the TURNkey project and are thus capable of updating the forecast in real time based on the latest earthquake information and observations (e.g., the earthquake catalogue). Through a case study in a synthetic city, this study first shows that changes in the earthquake probabilities can be used as an indicator to inform the authorities or property owners about heightened seismicity, based on which the decision-maker can, for example, issue a warning of the potential seismic hazard. Moreover, this study illustrates that OEF, together with the risk and loss analysis, provides the decision-maker with a better picture of the potential seismic impact on physical vulnerabilities (e.g., damage, economic loss, functionality) and social vulnerabilities (e.g., casualties and required shelters). Finally, given the decision-maker's preferences, this study shows how the hazard and risk results are used to help the decision-maker identify the optimal action based on the cost-benefit class and the optimality value computed with the multi-criteria decision-making methodology.
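
As a toy illustration of the cost-benefit step, and not the TURNkey/SELENA implementation (all names and numbers below are hypothetical), the core of the decision rule can be reduced to comparing the forecast-weighted loss reduction of a mitigation action with its cost:

```python
def benefit_cost_ratio(p_event, loss_no_action, loss_with_action, action_cost):
    """Expected-loss reduction from a mitigation action divided by its cost.

    p_event          : forecast probability of the damaging event (from OEF)
    loss_no_action   : expected loss if the event occurs and no action is taken
    loss_with_action : expected loss if the event occurs after the action
    Ratios above 1 indicate the action is cost-beneficial under this forecast.
    """
    expected_saving = p_event * (loss_no_action - loss_with_action)
    return expected_saving / action_cost
```

In an OEF setting, `p_event` is the quantity that the ETAS model updates in real time, so an action can switch from non-beneficial to beneficial as seismicity evolves.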

How to cite: Huang, C., Bolin, H., Refsum, V., and Meslem, A.: Using Operational Earthquake Forecasting Tool for Decision Making: A Synthetic Case Study, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3194, https://doi.org/10.5194/egusphere-egu22-3194, 2022.

EGU22-3337 | Presentations | NH4.1

Multiparametric and multilayer investigation of global earthquakes in the World by a statistical approach 

Dedalo Marchetti, Kaiguang Zhu, Angelo De Santis, Saioa A. Campuzano, Donghua Zhang, Maurizio Soldani, Ting Wang, Gianfranco Cianchini, Serena D’Arcangelo, Domenico Di Mauro, Alessandro Ippolito, Adriano Nardi, Martina Orlando, Loredana Perrone, Alessandro Piscini, Dario Sabbagh, Xuhui Shen, Zeren Zhima, and Yiqun Zhang and the Zhu Kaiguang's earthquake research group in Jilin University

Earthquake prediction has always been a challenging task, and some researchers have even proposed that it is an impossible goal, concluding that earthquakes are unpredictable events. Such a conclusion seems too extreme and contrasts with several pieces of evidence of alterations recorded by various instruments on the ground, in the atmosphere and, more recently, by Earth-observation satellites. On the other hand, it is clear that searching for the "perfect precursor parameter" does not seem to be a good way forward, since the earthquake process is a complex phenomenon. In fact, a precursor that works for one earthquake does not necessarily work for the next one, even on the same fault. In some cases, another problem for precursor identification is the recurrence time between earthquakes, which can be very long; in such cases we do not have comparable observations of earthquakes generated by the same fault system.

In past years, we concentrated mainly on two approaches: statistical studies and single case studies. The first consists of statistical evidence on ionospheric disturbances possibly related to M5.5+ earthquakes (e.g., presented at EGU2018-9468 and published by De Santis et al., Scientific Reports, 2019); furthermore, some clear signals in the atmosphere statistically preceded the occurrence of M8+ events (e.g., presented at EGU2020-19809). On the other side, we also investigated about 20 earthquakes that occurred in the last ten years, some of them through very detailed multiparametric investigations, such as the M7.5 Indonesia earthquake (presented at EGU2019-8077 and published by Marchetti et al., JAES, 2020) or the Jamaica earthquake investigation presented at the last EGU (EGU2021-15456). We found that both approaches are very important: the statistical studies can provide evidence that at least some of the detected anomalies are related to the earthquakes, while the single case studies permit us to explore in depth the details and the possible connections between the geolayers (lithosphere, atmosphere and ionosphere).

In this presentation, we want to show an update of the statistical study of the atmosphere and ionosphere, together with a new statistical investigation of the seismic acceleration before M7.5+ global earthquakes.

Finally, we demonstrate that it is essential to consider the earthquake not as a point source (the basic approximation) but in all its complexity, including its focal mechanism, fault rupture length and other seismological constraints, in order to better understand the preparation phase of earthquakes and the reasons for their different behaviour. These studies provide hope and fundamental (but not yet sufficient) tools for the possible achievement, one day, of earthquake prediction capabilities.

How to cite: Marchetti, D., Zhu, K., De Santis, A., Campuzano, S. A., Zhang, D., Soldani, M., Wang, T., Cianchini, G., D’Arcangelo, S., Di Mauro, D., Ippolito, A., Nardi, A., Orlando, M., Perrone, L., Piscini, A., Sabbagh, D., Shen, X., Zhima, Z., and Zhang, Y. and the Zhu Kaiguang's earthquake research group in Jilin University: Multiparametric and multilayer investigation of global earthquakes in the World by a statistical approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3337, https://doi.org/10.5194/egusphere-egu22-3337, 2022.

EGU22-3610 | Presentations | NH4.1

Mechanism of frictional discharge plasma at fault asperities 

Kiriha Tanaka, Jun Muto, and Hiroyuki Nagahama

The mechanism of the seismic-electromagnetic phenomena (SEP) proposed as earthquake precursors remains unrevealed. Previous studies reported that the surface charges on frictional and fractured quartz are sufficient to cause electric discharge due to the dielectric breakdown of air. To verify the discharge occurrence, friction experiments between a diamond pin and a quartz disk were performed under nitrogen gas with a CCD camera and a UV-VIS photon spectrometer (e.g., Muto et al., 2006). Photon emission was observed at the pin-to-disk gap only during friction. The photon spectra obtained from a friction experiment (normal stresses of 13–20 MPa, a sliding speed of 1.0×10⁻² m/s, and a gas pressure of 2.4×10⁴ Pa) showed that photons were emitted through the second positive band (SPB) system of neutral nitrogen and the first negative band (FNB) system of ionized nitrogen. The estimated potential difference at the gap gave the breakdown electric field and the surface charge density on the frictional surface at the gap where the photon emission was most intense. These values were sufficient to cause dielectric breakdown of air. Therefore, the above results demonstrated that frictional discharge can occur on a fault asperity due to dielectric breakdown of ambient gases by frictional electrification. However, the details of the electronic transitions during the discharge and of the discharge type are unknown.
This study discussed the details of the gas-pressure dependency of the photon emission intensity and distribution, and the discharge type, using electronic transition theory. Moreover, we compared the surface charge density estimated from the potential difference with that estimated from the electron and hole trapping centre concentrations in the frictional quartz subsurface measured by electron spin resonance. From this comparison, we also discussed the possibility that the trapping centres are the sources of the discharge. We could explain the nitrogen gas-pressure dependency of the photon emission intensity and the vibration temperature observed during our friction experiments using electronic transition theory. For example, Miura et al. (2004) reported that the vibration temperature of the SPB system and the relative intensity of the SPB system to the FNB system increase with decreasing gas pressure. Our result showed that the vibration temperature and the relative intensity were about 2800 K and 0.1 during the friction experiment under a pressure of 2.4×10⁴ Pa. The FNB system is related to negative glow discharge, and the discharge observed during the friction experiments was spark and/or glow discharge. As shown in several previous studies, the vibration temperature and molecule density increase with decreasing gas pressure, as do the electron temperature and density, as explained by electronic transition theory. This implies that the increase in the free path of excited molecules as the gas pressure decreases can result in a change of the photon emission pattern. The surface charge density of a frictional quartz surface estimated from the potential difference, 5.5×10⁻⁵ C/m², is included in the range of 6.51×10⁻⁶–6.4×10⁻³ C/m² estimated from the trapping centre concentrations. Hence, the trapping centres can be the sources of the frictional discharge.
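
The link between the gap potential difference and a surface charge density can be illustrated with a simple parallel-plate estimate. This is a simplifying assumption, not the authors' exact derivation: the gap field is E = V/d and the charge density sustaining it is σ = ε0·E (the numbers in the usage note are illustrative only).

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def surface_charge_density(potential_diff_v, gap_m):
    """Parallel-plate estimate: sigma = eps0 * E with E = V/d, in C/m^2."""
    return EPS0 * potential_diff_v / gap_m
```

For example, a few hundred volts across a gap of order 0.1 mm yields σ of order 10⁻⁵ C/m², the same order of magnitude as the 5.5×10⁻⁵ C/m² quoted above.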

How to cite: Tanaka, K., Muto, J., and Nagahama, H.: Mechanism of frictional discharge plasma at fault asperities, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3610, https://doi.org/10.5194/egusphere-egu22-3610, 2022.

EGU22-4417 | Presentations | NH4.1

The study of the geomagnetic diurnal variation behavior associated with Mw>4.9 Vrancea (Romania) Earthquakes 

Iren Adelina Moldovan, Victorin Emilian Toader, Marco Carnini, Laura Petrescu, Anica Otilia Placinta, and Bogdan Dumitru Enescu

Diurnal geomagnetic variations are generated in the magnetosphere and last about 24 hours. They can be seen in the recordings of all magnetic observatories, with amplitudes of several tens of nT on all magnetic components. The shape and amplitude of the diurnal variations strongly depend on the geographic latitude of the observatory. In addition to the dominant external source from the interaction with the magnetosphere, the diurnal geomagnetic variation is also influenced by local phenomena, mainly due to internal electric fields. The external influence remains unchanged over distances of hundreds of kilometers, while the internal influence may differ over very short distances due to the underground conductivity. The ratio of the diurnal geomagnetic variation at two stations should be stable in calm periods, and could be disturbed by phenomena occurring during the preparation of an earthquake, when, at the station inside the seismogenic zone, the underground conductivity would change or additional currents would appear. The cracking process inside the lithosphere before and during earthquake occurrence possibly modifies the underground electrical structure and emits electromagnetic waves.

In this paper, we study how the diurnal geomagnetic field variations are related to Mw>4.9 earthquakes that occurred in Vrancea, Romania. For this purpose, we use two magnetometers situated 150 km apart: one at the Muntele Rosu (MLR) observatory of NIEP, inside the Vrancea seismic zone, and the other at the Surlari (SUA) observatory of IGR and INTERMAGNET, outside the preparation area of moderate earthquakes. We have studied the daily ratio of the magnetic diurnal variation, R = ΔB(MLR)/ΔB(SUA), where ΔB = Bmax − Bmin over a 24-hour period, during the last 10 years, to identify behavior patterns associated with external or internal conditions.
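
The daily-range ratio used above is straightforward to compute; the numpy sketch below is illustrative (assumed function names, not NIEP's processing chain), taking two one-component records with a common, regular sampling.

```python
import numpy as np

def daily_range(b, samples_per_day):
    """dB = Bmax - Bmin for each complete 24 h window of a magnetic component record."""
    b = np.asarray(b, dtype=float)
    days = b[: b.size - b.size % samples_per_day].reshape(-1, samples_per_day)
    return days.max(axis=1) - days.min(axis=1)

def range_ratio(b_mlr, b_sua, samples_per_day):
    """R = dB(MLR) / dB(SUA): expected to stay stable during magnetically calm periods."""
    return daily_range(b_mlr, samples_per_day) / daily_range(b_sua, samples_per_day)
```

A drop of R in the vertical component a few days before an event, followed by a return to its baseline, is the kind of pattern described in the conclusions below.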

As a first conclusion, we can mention that the only visible disturbances appear before some earthquakes in Vrancea with Mw > 5.5, when we see a differentiation between the two recordings due to possible local internal phenomena at MLR. The differentiation consists of a decrease in the vertical-component range Bzmax − Bzmin at MLR compared to SUA a few days before the earthquake, and a return to the initial value after the earthquake. These studies need to be continued in order to determine whether this is a repetitive behavior or just an isolated phenomenon.

Acknowledgments:

The research was supported by: the NUCLEU program (MULTIRISC) of the Romanian Ministry of Research and Innovation through the projects PN19080102 and by the Executive Agency for Higher Education, Research, Development and Innovation Funding (UEFISCDI) through the projects PN-III-P2-2.1-PED-2019-1693, 480 PED/2020 (PHENOMENAL) and PN-III-P4-ID-PCE- 2020-1361, 119 PCE/2021 (AFROS).

How to cite: Moldovan, I. A., Toader, V. E., Carnini, M., Petrescu, L., Placinta, A. O., and Enescu, B. D.: The study of the geomagnetic diurnal variation behavior associated with Mw>4.9 Vrancea (Romania) Earthquakes, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4417, https://doi.org/10.5194/egusphere-egu22-4417, 2022.

EGU22-5375 | Presentations | NH4.1 | Highlight

Non-tectonic-induced stress variations on active faults 

Yiting Cai and Maxime Mouyen

Non-tectonic processes, namely solid earth tides and surface loads such as the ocean, atmosphere, and continental water, constantly modify the stress field of the Earth's crust. Such stress perturbations may trigger earthquakes. Several previous studies reported that tides or hydrological loading can modulate seismicity in some areas. We elaborate on this idea and compute the total Coulomb stress change created by solid earth tides and surface loads together on active faults. We expect that computing a total stress budget over all non-tectonic processes is more relevant than focusing on one of these processes in particular. The Coulomb stress change is a convenient approach to infer whether a fault is brought closer to, or further from, critical rupture when experiencing a given stress state. It requires knowing 1) the fault's rake and geometry and 2) the value of the stress applied on it, which we retrieve from a subduction zone geometry model (Slab2) and a loading-induced Earth stress database, respectively. In this study, we focus on the Coulomb stress variations on the Kuril-Japan fault over the last few years. By applying this method to the entire Slab2 catalogue and other known active faults, we aim at producing a database of non-tectonic-induced Coulomb failure function variations. Using earthquake catalogues, this database can then be used to statistically infer the role of non-tectonic processes in earthquake nucleation.
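
For reference, the Coulomb failure function change used in such studies is commonly written ΔCFF = Δτ + μ′Δσn, with Δτ the shear stress change resolved in the slip (rake) direction and Δσn the normal stress change (positive when the fault is unclamped). The sketch below is the generic formula with an assumed effective friction value, not the authors' exact parameterization.

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """dCFF = d_tau + mu_eff * d_sigma_n, all stresses in Pa.

    d_tau     : shear stress change resolved along the fault's rake direction
    d_sigma_n : normal stress change, positive when the fault is unclamped
    mu_eff    : effective friction coefficient (0.4 is a common generic choice)
    Positive dCFF brings the fault closer to failure.
    """
    return d_tau + mu_eff * d_sigma_n
```

Summing this quantity over tidal and loading contributions, resolved on each Slab2 fault geometry, gives the total non-tectonic stress budget described above.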

How to cite: Cai, Y. and Mouyen, M.: Non-tectonic-induced stress variations on active faults, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5375, https://doi.org/10.5194/egusphere-egu22-5375, 2022.

EGU22-6296 | Presentations | NH4.1

Analysis of Swarm Satellite Magnetic Field Data before and after the 2015 Mw7.8 Nepal Earthquake Based on Non-negative Tensor Decomposition 

Mengxuan Fan, Kaiguang Zhu, Angelo De Santis, Dedalo Marchetti, Gianfranco Cianchini, Alessandro Piscini, Loredana Perrone, Xiaodan He, Jiami Wen, Ting Wang, Yiqun Zhang, Wenqi Chen, Hanshuo Zhang, Donghua Zhang, and Yuqi Cheng

In this paper, based on Non-negative Tensor Decomposition (NTD), we analyzed the Y-component ionospheric magnetic field data observed by the Swarm Alpha and Charlie satellites before, during and after the 2015 (Mw=7.8) Nepal earthquake (April 25, 28.231°N 84.731°E). All the observation data were analyzed, including data collected under quiet and strong geomagnetic activity. For each investigated satellite track, we obtain a tensor, which is decomposed into three components. We found that the cumulative number of anomalous tracks inside the earthquake-sensitive area for one of the decomposition components (i.e., hs1), whose energy and entropy are more concentrated inside that area, shows an accelerated increase conforming to a sigmoid trend from 60 to 40 days before the mainshock. After that, until the day before the mainshock, the cumulative result displays a weak acceleration obeying a power-law trend, and it resumes linear growth after the earthquake. According to the basis vectors, the frequency of the ionospheric magnetic anomalies is around 0.02 to 0.1 Hz, and the depth of the mainshock estimated by the skin-depth formula is similar to the real one.

In addition, we performed a confutation analysis to exclude the influence of geomagnetic and solar activity on the anomalous behavior of the cumulative result for the hs1 component, according to the ap, Dst and F10.7 indices. We also analyzed another area at the same magnetic latitude with no seismicity and found that its cumulative result shows a linear increase, which means that the accelerated anomalous phenomenon is not an effect of local time nor due to chance.

In the lithosphere, the cumulative Benioff strain S also shows two accelerating increases before the mainshock, consistent with the cumulative result of the ionospheric anomalies. During the first acceleration, the seismicity occurred around the boundary of the research area rather than near the epicenter, and most of the ionospheric anomalies were offset from the epicenter. During the second acceleration, some seismicity occurred closer to or on the mainshock fault, and the ionospheric anomalies appeared near the two faults around the epicenter as well.

Furthermore, we compared our results with other studies on the Nepal earthquake. We noticed that the ionospheric magnetic field anomalies began to accelerate two days after the subsurface microwave radiation anomaly detected by Feng Jing et al. (2019). The spatial distribution of some ionospheric anomalies is consistent with the atmospheric Outgoing Longwave Radiation (OLR) anomalies found by Ouzounov et al. (2021); the latter occurred around two faults near the epicenter, and the atmospheric anomalies occurred earlier than the ionospheric ones.

Considering the occurrence times of the anomalies in the different layers, the abnormal phenomenon appeared first in the lithosphere, then transferred to the atmosphere, and finally occurred in the ionosphere. These results can be described by the Lithosphere Atmosphere Ionosphere Coupling model.

All these analyses indicate that by means of the NTD method, we can use all observed multi-channel data to analyze the Nepal earthquake and obtain a component whose anomalies are likely to be related to the earthquake. 
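
As an illustration of the non-negative factorization idea behind NTD, here is a minimal two-way (matrix) analogue using Lee-Seung multiplicative updates. It is a hedged sketch, not the tensor algorithm actually applied to the Swarm tracks, and all names are assumptions; in the matrix case each factor column plays the role of one decomposition component such as hs1.

```python
import numpy as np

def nmf(V, rank=3, n_iter=1000, seed=0):
    """Non-negative factorization V ~ W @ H by Lee-Seung multiplicative updates.

    V must be element-wise non-negative; the updates then keep W and H
    non-negative at every step, which is the defining constraint of NTD/NMF.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    eps = 1e-12                                 # guard against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)    # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)    # multiplicative update for W
    return W, H
```

In the tensor setting the same non-negativity-preserving updates are applied to each mode's factor matrix of the per-track tensor instead of to a single W and H.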

How to cite: Fan, M., Zhu, K., De Santis, A., Marchetti, D., Cianchini, G., Piscini, A., Perrone, L., He, X., Wen, J., Wang, T., Zhang, Y., Chen, W., Zhang, H., Zhang, D., and Cheng, Y.: Analysis of Swarm Satellite Magnetic Field Data before and after the 2015 Mw7.8 Nepal Earthquake Based on Non-negative Tensor Decomposition, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6296, https://doi.org/10.5194/egusphere-egu22-6296, 2022.

EGU22-7107 | Presentations | NH4.1

Anomalous geomagnetic signal emphasised before the Mw8.2 Alaska earthquake occurred on July 29, 2021 

D. A. Stanica

A very strong earthquake of magnitude Mw8.2 struck the coastal zone of Alaska (USA) on July 29, 2021. This earthquake was felt around the Gulf of Alaska, over a wide offshore area belonging to the USA and Canada. In order to identify an anomalous geomagnetic signal before the onset of this earthquake, we retrospectively analyzed the data collected in the interval June 17 – July 31, 2021, via the internet (www.intermagnet.org), at two geomagnetic observatories, College (CMO), Alaska, and Newport (NEW), USA, by using the polarization parameter (BPOL) and strain-effect-related geomagnetic signal identification. Thus, for both observation sites (CMO and NEW), the daily mean distribution of BPOL and its standard deviation (STDEV) were carried out using FFT band-pass filtering in the ULF range (0.001–0.0083 Hz). Further on, a statistical analysis based on a standardized random variable equation was applied to emphasize the following: a) the anomalous signature related to the Mw8.2 earthquake in both time series BPOL*(CMO) and BPOL*(NEW); b) the differentiation of the transient local anomalies associated with the Mw8.2 earthquake from the internal and external parts of the geomagnetic field, taking the NEW observatory as reference. Consequently, in the BPOL*(NEW-CMO) time series, carried out for the interval 07–31 July 2021, a very clear anomalous maximum, greater than 1.2 STDEV, was detected on July 22, 7 days before the onset of the Mw8.2 earthquake.
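
The two processing ingredients named above, ULF band-pass filtering by FFT and reduction to a standardized variable, can be sketched as follows. This is an illustrative fragment with assumed function names, not the author's code.

```python
import numpy as np

def fft_bandpass(x, dt, f_lo=0.001, f_hi=0.0083):
    """Zero all Fourier coefficients outside [f_lo, f_hi] Hz and invert the FFT."""
    x = np.asarray(x, dtype=float)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, d=dt)
    X[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(X, n=x.size)

def standardize(series):
    """Standardized random variable z = (x - mean) / std, used to compare stations."""
    series = np.asarray(series, dtype=float)
    return (series - series.mean()) / series.std()
```

Applying `standardize` to the daily BPOL means of each observatory yields the BPOL* series whose inter-station difference is inspected for pre-seismic maxima.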

How to cite: Stanica, D. A.: ANOMALOUS GEOMAGNETIC SIGNAL EMPHASISED BEFORE THE Mw8.2 ALASKA EARTHQUAKE OCCURRED ON JULY 29, 2021, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7107, https://doi.org/10.5194/egusphere-egu22-7107, 2022.

EGU22-7148 | Presentations | NH4.1

Investigating the spatiotemporal relationship between thermal anomalies and surface deformation; The Arkalochori Earthquake sequence of September 2021, Crete, Greece 

S. Peleli, M. Kouli, and F. Vallianatos

Among the different parameters, fluctuations of the Earth's thermally emitted radiation, as measured by sensors on board satellite systems operating in the Thermal Infra-Red (TIR) spectral range, and Earth-surface deformation, as recorded by satellite radar interferometry, have long been proposed as potential earthquake precursors. Nevertheless, the spatiotemporal relationship between these two phenomena has been ignored until now.

On September 27, 2021, a strong earthquake of magnitude M5.8 occurred in Crete, near the village of Arkalochori at 06:17:21 UTC, as the result of shallow normal faulting. The epicenter of the seismic event was located at latitude 35.15 N and longitude 25.27 E, while the focal depth was 10 km. Since the beginning of June, almost 4 months earlier, more than 400 foreshocks ranging in magnitude from M0.5 to M4.8 were recorded in the broader area while the strongest aftershock (M 5.3) occurred on September 28th at 04:48:09 UTC.

Ten years of MODIS Land Surface Temperature and Emissivity Daily L3 Global 1 km satellite records were incorporated into the RETIRA index computation in order to detect and map probable pre-seismic and co-seismic thermal anomalies in the area of tectonic activation. At the same time, SAR images of the Copernicus Sentinel-1 satellite in both acquisition geometries were used to create differential interferograms and displacement maps according to the Interferometric Synthetic Aperture Radar (InSAR) technique. Then, the two kinds of datasets (i.e., thermal anomaly maps and crustal deformation maps) were introduced into a Geographic Information System environment along with geological formations, active faults, and earthquake epicenters. By overlaying all the aforementioned data, their spatiotemporal relation is explored.
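
The thermal-anomaly step can be illustrated with a RETIRA-style normalization: a pixel's excess temperature over the scene average, standardized against its own multi-year statistics. This is a hedged sketch only; the actual RETIRA definition in the robust satellite techniques literature involves cloud screening and reference-field details omitted here, and the array layout and names are assumptions.

```python
import numpy as np

def retira_like_index(lst_stack):
    """RETIRA-style normalized thermal-anomaly index.

    lst_stack : array (time, y, x) of land surface temperature scenes.
    dT removes each scene's spatial average; the index standardizes dT
    pixel-by-pixel against its historical mean and standard deviation.
    """
    lst = np.asarray(lst_stack, dtype=float)
    dT = lst - lst.mean(axis=(1, 2), keepdims=True)   # excess over scene average
    mu = dT.mean(axis=0)                              # per-pixel historical mean
    sigma = dT.std(axis=0)                            # per-pixel historical std
    return (dT - mu) / sigma
```

Index values several standard deviations above zero flag candidate thermal anomalies, which can then be overlaid on the InSAR displacement maps in a GIS.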

How to cite: Peleli, S., Kouli, M., and Vallianatos, F.: Investigating the spatiotemporal relationship between thermal anomalies and surface deformation; The Arkalochori Earthquake sequence of September 2021, Crete, Greece., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7148, https://doi.org/10.5194/egusphere-egu22-7148, 2022.

EGU22-7309 | Presentations | NH4.1

Wave-like structures prior to very recent southeastern Mediterranean earthquakes as recorded by a VLF/LF radio receiver in Athens (Greece) 

Dimitrios Z. Politis, Stelios M. Potirakis, Sagardweep Biswas, Sudipta Sasmal, and Masashi Hayakawa

A VLF (10–47.5 kHz) radio receiver with call sign UWA has recently been installed at the University of West Attica in Athens (Greece) and continuously monitors the lower ionosphere by means of the receptions from many transmitters, in order to identify possible pre-seismic signatures or other precursors associated with extreme geophysical and space phenomena. In this study, we examine the case of three very recent strong mainshocks with magnitude Mw ≥ 5.5 that happened in September and October 2021 in the southeastern Mediterranean. The VLF data used in this work correspond to the recordings of one specific transmitter with the call sign "ISR", located in Negev (Israel). The borders of the 5th Fresnel zone of the corresponding sub-ionospheric propagation path (ISR-UWA) are close to the epicenters of two of the earthquakes (EQs), while the third one is located within the 5th Fresnel zone of the specific path. In this work, we computed the Morlet wavelet scalogram of the nighttime amplitude signal in order to check for embedded wave-like structures, which would indicate the existence of Atmospheric Gravity Waves (AGWs) before each of the examined EQs. In our investigation, we also checked for other global extreme phenomena, such as geomagnetic storms and solar flares, which may have occurred close in time to the examined EQs and could have a contaminating impact on the obtained results. Our results revealed wave-like structures in the amplitude of the signal a few days before the occurrence of these three EQs.
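
A minimal version of the scalogram computation can be sketched as below: an illustrative numpy fragment with an assumed normalization, not the authors' processing. Each row of the output gives the absolute Morlet wavelet coefficients at one target period, so a persistent bright band at periods of minutes to tens of minutes would signal the AGW-like wave structures discussed above.

```python
import numpy as np

def morlet_scalogram(x, dt, periods, w0=6.0):
    """Absolute Morlet continuous-wavelet coefficients, one row per target period.

    x       : signal (e.g. nighttime VLF amplitude), sampled every dt seconds
    periods : iterable of target periods in seconds
    Returns an array of shape (len(periods), len(x)).
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    rows = []
    for p in periods:
        s = w0 * p / (2 * np.pi)                  # scale matching this period
        t = np.arange(-4 * s, 4 * s + dt, dt)     # wavelet support of +/- 4 scales
        wav = np.exp(1j * w0 * t / s) * np.exp(-t**2 / (2 * s**2))
        wav /= np.sqrt(s)                         # rough amplitude normalization
        rows.append(np.abs(np.convolve(x, np.conj(wav), mode="same")))
    return np.array(rows)
```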

How to cite: Politis, D. Z., Potirakis, S. M., Biswas, S., Sasmal, S., and Hayakawa, M.: Wave-like structures prior to very recent southeastern Mediterranean earthquakes as recorded by a VLF/LF radio receiver in Athens (Greece), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7309, https://doi.org/10.5194/egusphere-egu22-7309, 2022.

EGU22-8280 | Presentations | NH4.1

Primary-level Site Effect Zoning in Developing Urban Areas Through the Geomorphic Interpretation of Landforms 

Zahra Pak Tarmani, Zohre Masoumi, and Esmaeil Shabanian

The site effect has a great impact on seismic hazard assessment in urban and industrial regions. For instance, a layer of soft soil several meters thick amplifies seismic waves by 1.5 to 6 times relative to the underlying bedrock. Therefore, investigating the main characteristics of Quaternary deposits, such as granulometry and mechanical layering, is crucial in site effect studies. These parameters are directly related to the local geologic/geomorphic setting and genesis processes of the Quaternary deposits. Nevertheless, large cities in developing countries have rapidly expanded over Quaternary terrains before those terrains could be evaluated for the site effect. This rapid growth in urbanization led us to take advantage of old aerial photographs, reprocessed with new image processing techniques, in order to provide 3D terrain models of such areas before the recent urbanization. This helped us in the geomorphic terrain classification and the detection of regions with different site effects originally caused by the geomorphic setting and genesis of the Quaternary terrains. For example, the site effect in a river flood plain will differ from that of surrounding areas underlain by alluvial conglomerates or bedrock. The main target of this study is investigating the primary-level site effect in Urmia city using 3D geomorphic models derived from aerial photos taken in 1955. Urmia in NW Iran is one of the populated high-risk areas according to the standard earthquake regulations of Iran, and covers a wide region from mountainous areas to the ancient coast of Lake Urmia, with the Shahr Chai River as the axial drainage. We created the 3D terrain model through the Structure from Motion (SfM) algorithm. We have produced a detailed geomorphic map of Plio-Quaternary terrains using the 3D anaglyph view, Digital Elevation Model (DEM), and orthophoto mosaic of the region. It was complemented by granulometry and mechanical layering information from the available geotechnical boreholes to reconstruct a shallow soil structure model for the area, which allowed us to establish a primary-level site effect zoning for Urmia. Our results reveal the presence of five distinct geomorphic zones, with different genesis processes and soil characteristics from piedmont to coastal zones, which represent different soil structures and probable site effects. This zoning paves the way for complementary site effect investigations at lower cost and in less time. The developed method offers a practical tool to evaluate the primary site effect in urbanized areas subject to future natural hazards such as earthquakes, landslides and floods, before designing geophysical networks for the measurement of quantitative site effect parameters such as the Nakamura microtremor H/V ratio and Multichannel Analysis of Surface Waves.
Key words: Earthquake hazard, Site effect, Image Processing, Aerial photos, Quaternary geology, Structure from Motion

How to cite: Pak Tarmani, Z., Masoumi, Z., and Shabanian, E.: Primary-level Site Effect Zoning in Developing Urban Areas Through the Geomorphic Interpretation of Landforms, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8280, https://doi.org/10.5194/egusphere-egu22-8280, 2022.

EGU22-8420 | Presentations | NH4.1

Ionospheric perturbations related to seismicity and volcanic eruptions inferred from VLF/LF electric field measurements 

Hans U. Eichelberger, Konrad Schwingenschuh, Mohammed Y. Boudjada, Bruno P. Besser, Daniel Wolbang, Maria Solovieva, Pier F. Biagi, Manfred Stachel, Özer Aydogar, Christoph Schirninger, Cosima Muck, Claudia Grill, and Irmgard Jernej

In this study we investigate electric field perturbations on sub-ionospheric VLF/LF paths which cross seismically and volcanically active areas. We use waveguide cavity radio links from the transmitters TBB (26.70 kHz, Bafa, Turkey) and ITS (45.90 kHz, Niscemi, Sicily, Italy) to the seismo-electromagnetic receiver facility GRZ (Graz, Austria). The continuous real-time amplitude and phase measurements have a temporal resolution of 1 sec; events are analyzed for the period 2020-2021. Of high interest in this time span are the paroxysms of the stratovolcano Mt. Etna, Sicily, Italy. We show electric field amplitude variations that could be related to atmospheric waves which originated at the active crater and propagated up to the lower ionosphere. This corresponds to vertical coupling processes from the ground to the E-region, the upper waveguide boundary during night-time. Ionospheric variations possibly related to earthquakes are discussed for events along the TBB-GRZ path, assuming an area given by the so-called effective precursor manifestation zone [1,2]. The findings indicate statistical relations between electric field amplitude variations on the ITS-GRZ path in the VLF/LF sub-ionospheric waveguide and high volcanic activity of Etna. For earthquakes, multi-parametric observations should be taken into account to diagnose the physical processes related to the events. In summary, VLF/LF investigations in a network, together with automated data processing, can be an essential component of natural hazard characterization.
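The earthquake preparation zone of reference [1] is commonly approximated by the Dobrovolsky strain radius R = 10^(0.43·M) km. As an illustration (the epicenter coordinates below are hypothetical, only the Graz station coordinates come from the text), one can check whether a receiver falls inside that zone:

```python
import math

def dobrovolsky_radius_km(magnitude):
    """Radius of the earthquake preparation zone (Dobrovolsky et al., 1979)."""
    return 10 ** (0.43 * magnitude)

def great_circle_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
    """Haversine great-circle distance between two points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r_earth * math.asin(math.sqrt(a))

# hypothetical epicenter vs. the Graz receiver (47.03N, 15.46E) named in the text
d = great_circle_km(45.4, 16.2, 47.03, 15.46)
for mw in (5.4, 6.4):
    r = dobrovolsky_radius_km(mw)
    print(f"Mw {mw}: R = {r:.0f} km, receiver inside zone: {d <= r}")
```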

References:

[1] Dobrovolsky, I.P., Zubkov, S.I., and Miachkin, V.I., Estimation of the size of earthquake preparation zones, PAGEOPH 117, 1025–1044, 1979. https://doi.org/10.1007/BF00876083

[2] Bowman, D.D., Ouillon, G., Sammis, C.G., Sornette, A., and Sornette, D., An observational test of the critical earthquake concept, JGR Solid Earth, 103, B10, 24359-24372, 1998. https://doi.org/10.1029/98JB00792

How to cite: Eichelberger, H. U., Schwingenschuh, K., Boudjada, M. Y., Besser, B. P., Wolbang, D., Solovieva, M., Biagi, P. F., Stachel, M., Aydogar, Ö., Schirninger, C., Muck, C., Grill, C., and Jernej, I.: Ionospheric perturbations related to seismicity and volcanic eruptions inferred from VLF/LF electric field measurements, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8420, https://doi.org/10.5194/egusphere-egu22-8420, 2022.

EGU22-8426 | Presentations | NH4.1 | Highlight

Earthquake nowcasting: Retrospective testing in Greece 2019 - 2021 

Gerasimos Chouliaras, Efthimios S. Skordas, and Nikolaos Sarlis

Earthquake nowcasting [1] (EN) is a modern method for estimating seismic risk by evaluating the progress of the earthquake cycle in fault systems [2]. EN employs natural time [3], which uniquely estimates seismic risk by means of the earthquake potential score (EPS) [1,4], and has found many useful applications both regionally and globally [1,2,4-10]. Among these applications, here we focus on those in Greece since 2019 [2], using the earthquake catalogue of the Institute of Geodynamics of the National Observatory of Athens (NOA) [11–13] for the estimation of the EPS at various locations: for example, the ML(NOA)=6.0 off-shore Southern Crete earthquake on 2 May 2020, the ML(NOA)=6.7 Samos earthquake on 30 October 2020, the ML(NOA)=6.0 Tyrnavos earthquake on 3 March 2021, the ML(NOA)=5.8 Arkalohorion Crete earthquake on 27 September 2021, and the ML(NOA)=6.3 Sitia Crete earthquake on 12 October 2021. The results are promising and reveal that earthquake nowcast scores provide useful information on impending seismicity.
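The earthquake potential score of nowcasting [1] is, in essence, the empirical cumulative distribution of historical small-earthquake counts between successive large events, evaluated at the current count. A minimal sketch, with hypothetical counts (not NOA catalogue values):

```python
import bisect

def eps(small_counts_between_large, current_count):
    """Earthquake potential score: fraction of historical inter-event
    small-earthquake counts that are <= the current count (empirical CDF)."""
    hist = sorted(small_counts_between_large)
    return bisect.bisect_right(hist, current_count) / len(hist)

# hypothetical numbers of small events observed between successive large shocks
history = [120, 45, 300, 210, 90, 160, 75, 260]
print(eps(history, 20))    # early in the earthquake cycle -> low score
print(eps(history, 250))   # late in the cycle -> high score
```

A high EPS means the region has already accumulated more small events since the last large shock than in most past cycles, i.e. the cycle is well advanced.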

[1] J.B. Rundle, D.L. Turcotte, A. Donnellan, L. Grant Ludwig, M. Luginbuhl, G. Gong, Earth and Space Science 3 (2016) 480–486. dx.doi.org/10.1002/2016EA000185

[2] J.B. Rundle, A. Donnellan, G. Fox, J.P. Crutchfield, Surveys in Geophysics (2021). dx.doi.org/10.1007/s10712-021-09655-3

[3] P.A. Varotsos, N.V. Sarlis, E.S. Skordas, Phys. Rev. E 66 (2002) 011902. dx.doi.org/10.1103/physreve.66.011902

[4] S. Pasari, Pure Appl. Geophys. 176 (2019) 1417–1432. dx.doi.org/10.1007/s00024-018-2037-0

[5] M. Luginbuhl, J.B. Rundle, D.L. Turcotte, Pure and Applied Geophysics 175 (2018) 661–670. dx.doi.org/10.1007/s00024-018-1778-0

[6] M. Luginbuhl, J.B. Rundle, D.L. Turcotte, Geophys. J. Int. 215 (2018) 753–759. dx.doi.org/10.1093/gji/ggy315

[7] N.V. Sarlis, E.S. Skordas, Entropy 20 (2018) 882. dx.doi.org/10.3390/e20110882

[8] S. Pasari, Y. Sharma, Seismological Research Letters 91 (6) (2020) 3358–3369. dx.doi.org/10.1785/0220200104

[9] J. Perez-Oregon, F. Angulo-Brown, N.V. Sarlis, Entropy 22 (11) (2020) 1228. dx.doi.org/10.3390/e22111228

[10] P.K. Varotsos, J. Perez-Oregon, E.S. Skordas, N.V. Sarlis, Applied Sciences 11 (21) (2021) 10093. dx.doi.org/10.3390/app112110093

[11] G. Chouliaras, Natural Hazards and Earth System Sciences 9 (3) (2009) 905–912. dx.doi.org/10.5194/ nhess-9-905-2009

[12] G. Chouliaras, N.S. Melis, G. Drakatos, K. Makropoulos, Advances in Geosciences 36 (2013) 7–9. dx.doi.org/10.5194/adgeo-36-7-2013

[13] A. Mignan, G. Chouliaras, Seismological Research Letters 85 (3) (2014) 657–667. dx.doi.org/10.1785/0220130209

How to cite: Chouliaras, G., Skordas, E. S., and Sarlis, N.: Earthquake nowcasting: Retrospective testing in Greece 2019 - 2021, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8426, https://doi.org/10.5194/egusphere-egu22-8426, 2022.

EGU22-8718 | Presentations | NH4.1

Visibility graph analysis to identify correlations in the magnitude earthquake time series monitored in the Tehuantepec Isthmus subduction zone, México 

A. Ramírez-Rojas and E. L. Flores-Márquez

The visibility graph method has made it possible to identify statistical properties of earthquake magnitude time series. Such statistical features have helped to classify earthquake sequences into different categories according to their tectonic sources and the associated seismic dynamics. The Tehuantepec Isthmus subduction zone, México, has shown different dynamical behavior before and after the M8.2 earthquake that occurred on September 07, 2017. This behavior is associated with the temporal correlations observed in the magnitude sequences. To characterize these correlations we use the visibility graph method, which has shown great potential for recovering the dynamical properties of the studied system from the statistical properties of the network graph. In this study we investigate four periods: the first between 2005 and 2012, the second (before the M8.2 EQ) from 2012 to 2017, the third from September 2017 to March 2018 corresponding to the aftershock period, and the fourth from April to December 2021. To determine the type of connectivity corresponding to each period, we have computed the distribution function P(k) of the connectivity degree k. Our results show that the connectivity increases up to the earthquake and decreases during the aftershock period.
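The natural visibility criterion and the degree distribution P(k) used above can be sketched as follows; the magnitude sequence is a toy example, not catalogue data:

```python
import numpy as np

def visibility_degrees(series):
    """Node degrees of the natural visibility graph of a time series:
    samples (a, ya) and (b, yb) are linked if every sample strictly
    between them lies below the straight line joining them."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    deg = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            tc = np.arange(a + 1, b)
            line = y[b] + (y[a] - y[b]) * (b - tc) / (b - a)
            if np.all(y[tc] < line):   # vacuously true for adjacent samples
                deg[a] += 1
                deg[b] += 1
    return deg

def degree_distribution(deg):
    """Empirical connectivity distribution P(k) as {k: probability}."""
    ks, counts = np.unique(deg, return_counts=True)
    return dict(zip(ks.tolist(), (counts / counts.sum()).tolist()))

# toy magnitude sequence; the largest event "sees" most other nodes
mags = [4.1, 3.2, 5.0, 3.5, 3.3, 4.4, 3.1, 8.2, 3.0, 3.6]
deg = visibility_degrees(mags)
print(deg, degree_distribution(deg))
```

In such graphs, large-magnitude events become hubs, so changes in the tail of P(k) track changes in the correlation structure of the sequence.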

How to cite: Ramírez-Rojas, A. and Flores-Márquez, E. L.: Visibility graph analysis to identify correlations in the magnitude earthquake time series monitored in the Tehuantepec Isthmus subduction zone, México., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8718, https://doi.org/10.5194/egusphere-egu22-8718, 2022.

EGU22-8924 | Presentations | NH4.1 | Highlight

The Cascading Foreshock Sequence of the Ms 6.4 Yangbi Earthquake in Yunnan, China 

Gaohua Zhu, Hongfeng Yang, Yen Joe Tan, Mingpei Jin, Xiaobin Li, and Wei Yang

Foreshocks may provide valuable information on the nucleation process of large earthquakes. The 2021 Ms 6.4 Yangbi, Yunnan, China, earthquake was preceded by abundant foreshocks in the ~75 hours leading up to the mainshock. To understand the space-time evolution of the foreshock sequence and its relationship to the mainshock nucleation, we built a high‐precision earthquake catalog using a machine-learning phase picker (EQTransformer) and the template matching method. The source parameters of 17 large foreshocks and the mainshock were derived to analyze their interaction. Observed “back-and-forth” spatial patterns of seismicity and intermittent episodes of foreshocks without an accelerating pattern do not favor hypotheses that the foreshocks were a manifestation of a slow slip or fluid front propagating along the mainshock’s rupture plane. The ruptured patches of most large foreshocks were adjacent to one another with little overlap, and the mainshock eventually initiated near the edge of the foreshocks’ ruptured area where there had been a local increase in shear stress. These observations are consistent with a triggered cascade of stress transfer, where previous foreshocks load adjacent fault patches to rupture as additional foreshocks, and eventually the mainshock.

How to cite: Zhu, G., Yang, H., Tan, Y. J., Jin, M., Li, X., and Yang, W.: The Cascading Foreshock Sequence of the Ms 6.4 Yangbi Earthquake in Yunnan, China, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8924, https://doi.org/10.5194/egusphere-egu22-8924, 2022.

EGU22-9690 | Presentations | NH4.1 | Highlight

Lesson learnt after long-term (>10 years) correlation analyses between satellite TIR anomalies and earthquakes occurrence performed over Greece, Italy, Japan and Turkey 

Valeria Satriano, Roberto Colonna, Angelo Corrado, Alexander Eleftheriou, Carolina Filizzola, Nicola Genzano, Hattori Katsumi, Mariano Lisi, Nicola Pergola, Vallianatos Filippos, and Valerio Tramutoli

In recent years, several long-term studies have been performed in order to evaluate the possible spatio-temporal correlation between anomalies in the Earth’s thermally emitted infrared (TIR) radiation and earthquake occurrence. Different seismically active areas around the world have been investigated in this way using TIR sensors on board geostationary (e.g. Eleftheriou et al., 2016; Genzano et al., 2020; Genzano et al., 2021; Filizzola et al., 2022) and polar (e.g. Zhang and Meng, 2019) satellites. Since the study of Filizzola et al. (2004), the better S/N ratio achievable by geostationary sensors (compared with polar ones) has made them the first choice for such long-term analyses.

In this paper the lessons learnt after 20 years of satellite TIR analyses are critically examined, with a view to the possible inclusion of such anomalies among the parameters usefully contributing to the construction of a multi-parametric system for a time-Dependent Assessment of Seismic Hazard.

The more recent results achieved by applying the RST approach (Tramutoli et al., 2005; Tramutoli, 2007) to long-term (>10 years) TIR satellite data collected by the geostationary sensor SEVIRI (on board MSG) over Greece (Eleftheriou et al., 2016), Italy (Genzano et al., 2020) and Turkey (Filizzola et al., 2022), and by JAMI and IMAGER (on board MTSAT satellites) over Japan (Genzano et al., 2021), will also be presented and discussed.
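At its core, the RST approach flags a TIR observation as anomalous when it deviates strongly from a multi-year, pixel-by-pixel reference built from homologous acquisition time slots. A much-simplified sketch of that normalized index on synthetic fields (the reference depth, anomaly size, and 4-sigma threshold are illustrative assumptions, not the operational RST configuration):

```python
import numpy as np

def rst_index(current, reference_stack):
    """Normalized TIR anomaly index: (signal - multi-year mean) / multi-year std,
    computed pixel by pixel for one acquisition time slot."""
    mu = reference_stack.mean(axis=0)
    sigma = reference_stack.std(axis=0)
    return (current - mu) / np.where(sigma == 0, 1.0, sigma)

rng = np.random.default_rng(2)
ref = rng.normal(290.0, 2.0, size=(10, 50, 50))  # ten years of one time slot, in K
img = rng.normal(290.0, 2.0, size=(50, 50))
img[20:23, 30:33] += 15.0                        # localized warm anomaly

idx = rst_index(img, ref)
anomalous = idx > 4.0                            # flag strongly deviating pixels
print(anomalous.sum())
```

The multi-year reference is what makes the method "robust": slow seasonal and site effects are absorbed into the mean and standard deviation fields rather than flagged as anomalies.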

References

Eleftheriou, A., C. Filizzola, N. Genzano, T. Lacava, M. Lisi, R. Paciello, N. Pergola, F. Vallianatos, and V. Tramutoli (2016), Long-Term RST Analysis of Anomalous TIR Sequences in Relation with Earthquakes Occurred in Greece in the Period 2004–2013, PAGEOPGH, 173(1), 285–303, doi:10.1007/s00024-015-1116-8.

Filizzola, C., N. Pergola, C. Pietrapertosa, V. Tramutoli (2004), Robust satellite techniques for seismically active areas moni-toring: a sensitivity analysis on September 7, 1999 Athens’s earthquake. Phys. Chem. Earth, 29, 517–527. 10.1016/j.pce.2003.11.019

Filizzola C., A. Corrado, N. Genzano, M. Lisi, N. Pergola, R. Colonna and V. Tramutoli (2022), RST Analysis of Anomalous TIR Sequences in relation with earthquakes occurred in Turkey in the period 2004–2015, Remote Sensing, (accepted).

Genzano, N., C. Filizzola, M. Lisi, N. Pergola, and V. Tramutoli (2020), Toward the development of a multi parametric system for a short-term assessment of the seismic hazard in Italy, Ann. Geophys, 63(5) doi:10.4401/ag-8227.

Genzano, N., C. Filizzola, K. Hattori, N. Pergola, and V. Tramutoli (2021), Statistical correlation analysis between thermal infrared anomalies observed from MTSATs and large earthquakes occurred in Japan (2005–2015). JGR: Solid Earth, 126, e2020JB020108, https://doi.org/10.1029/2020JB020108

Tramutoli, V. (2007), Robust Satellite Techniques (RST) for Natural and Environmental Hazards Monitoring and Mitigation: Theory and Applications, in 2007 International Workshop on the Analysis of Multi-temporal Remote Sensing Images, pp. 1–6, IEEE. doi: 10.1109/MULTITEMP.2007.4293057

Tramutoli, V., V. Cuomo, C. Filizzola, N. Pergola, C. Pietrapertosa (2005), Assessing the potential of thermal infrared satellite surveys for monitoring seismically active areas: The case of Kocaeli (İzmit) earthquake, August 17, 1999. RSE, 96, 409–426. https://doi.org/10.1016/j.rse.2005.04.006

Zhang, Y. and Meng, Q. (2019), A statistical analysis of TIR anomalies extracted by RSTs in relation to an earthquake in the Sichuan area using MODIS LST data, NHESS, 19, 535–549, https://doi.org/10.5194/nhess-19-535-2019, 2019

How to cite: Satriano, V., Colonna, R., Corrado, A., Eleftheriou, A., Filizzola, C., Genzano, N., Katsumi, H., Lisi, M., Pergola, N., Filippos, V., and Tramutoli, V.: Lesson learnt after long-term (>10 years) correlation analyses between satellite TIR anomalies and earthquakes occurrence performed over Greece, Italy, Japan and Turkey, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9690, https://doi.org/10.5194/egusphere-egu22-9690, 2022.

EGU22-10161 | Presentations | NH4.1 | Highlight

Analysis of VLF and LF signal fluctuations recorded by Graz facility prior to earthquakes occurrences 

Mohammed Y. Boudjada, Pier Francesco Biagi, Hans Ulrich Eichelberger, Patrick H.M. Galopeau, Konrad Schwingenschuh, Maria Solovieva, Helmut Lammer, Wolfgang Voller, and Masashi Hayakawa

In our study we report on earthquakes that occurred in Croatia and Slovenia in the period from 1 Jan. 2020 to 31 Dec. 2021. Those seismic events happened in a localized region confined between 13.46°E and 17.46°E in longitude and 45.03°N and 49.03°N in latitude. The maximum magnitudes, Mw6.4 and Mw5.4, occurred on 29 Dec. 2020 at 11:19 UT and on 22 March 2020 at 05:24 UT, respectively. We use two radio systems, INFREP (Biagi et al., 2019) and UltraMSK (Schwingenschuh et al., 2011), to investigate the reception conditions of LF-VLF transmitter signals. The selected earthquakes occurred at distances of less than 300 km from the Graz station (47.03°N, 15.46°E) in Austria. First, we examine the time evolution of earthquakes that occurred along the same meridian, i.e. at a geographical longitude of 16°E. Second, we study the daily VLF-LF transmitter signals, which exhibit a minimum around local sunrise and sunset. These daily variations are specifically considered two to three weeks before the occurrence of the two intense events with magnitudes Mw6.4 and Mw5.4. We discuss the unusual terminator-time motions of VLF-LF signals linked to earthquake occurrences, and their appearance at sunrise or sunset times. Such observational features are interpreted as disturbances of the transmitter signal propagation in the ionospheric D- and E-layers above the earthquake preparation zone (Hayakawa, 2015).


References:

Biagi et al., The INFREP Network: Present Situation and Recent Results, Open J. Earth. Research, 8, 2019.

Hayakawa, Earthquake Prediction with Radio Techniques, John Wiley and Sons, Singapore, 2015.

Schwingenschuh et al., The Graz seismo-electromagnetic VLF facility, Nat. Hazards Earth Syst. Sci., 11, 2011

How to cite: Boudjada, M. Y., Biagi, P. F., Eichelberger, H. U., Galopeau, P. H. M., Schwingenschuh, K., Solovieva, M., Lammer, H., Voller, W., and Hayakawa, M.: Analysis of VLF and LF signal fluctuations recorded by Graz facility prior to earthquakes occurrences, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10161, https://doi.org/10.5194/egusphere-egu22-10161, 2022.

EGU22-10209 | Presentations | NH4.1

Enhancing Data Sets From Rudna Deep Copper Mine, SW Poland: Implications on Detailed Structural Resolution and Short-Term Hazard Assessment 

Monika Sobiesiak, Konstantinos Leptokaropoulos, Monika Staszek, Natalia Poiata, Pascal Bernard, and Lukasz Rudzinski

Applying the software BackTrackBB (Poiata et al., 2016) for automated detection and location of seismic events to data sets from Rudna Deep Copper Mine, SW Poland, led to an enhancement of existing routine catalogs by about a factor of 10,000 in the number of events. Following our hypothesis that all types of seismic events contribute to seismic hazard in a mine, we included all events, from major mine collapses (M>3), recorded blasting works and detonations, to machinery noise. These enhanced data sets enabled a detailed spatio-temporal analysis of seismicity in the mine and a short-term hazard assessment on a daily basis.

In this study, we focus on the data from two days with major mine collapses: the 2016-11-29 Mw=3.4, and the 2018-09-15 Mw=3.7 events. The spatio-temporal distribution of seismicity of both days deciphered detailed horizontal and vertical structures and revealed the increase of seismic activity after the daily blasting work. The daily histograms exhibit similar patterns, suggesting the dominant influence of explosions on the overall seismicity in the mine. Using the enhanced data sets for short-term hazard assessment, we observed gaps in the activity rates before the main shocks. They were followed by sudden increase of seismicity, a simultaneous drop in seismic b-value, and an increase in exceedance probability for the assumed largest magnitude events. This demonstrates the usefulness of enhanced data sets from surface networks for revealing precursory phenomena before destructive mine collapses and suggests a testing strategy for early warning procedures.
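The drop in seismic b-value mentioned above is usually tracked with the Aki/Utsu maximum-likelihood estimator. An illustrative sketch on a synthetic Gutenberg-Richter sample (not the authors' code; the completeness magnitude and sample are hypothetical):

```python
import math
import random

def b_value_ml(magnitudes, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for events with M >= mc,
    with the usual half-bin correction for magnitudes binned at dm."""
    m = [x for x in magnitudes if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# synthetic Gutenberg-Richter sample with a true b-value of about 1
random.seed(0)
b_true, mc = 1.0, 0.5
mags = [mc + random.expovariate(b_true * math.log(10)) for _ in range(20000)]
print(b_value_ml(mags, mc, dm=0.0))  # continuous magnitudes -> dm = 0
```

Computed in sliding time windows over an enhanced catalog, this estimator is what reveals the kind of pre-collapse b-value drop described in the abstract.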

How to cite: Sobiesiak, M., Leptokaropoulos, K., Staszek, M., Poiata, N., Bernard, P., and Rudzinski, L.: Enhancing Data Sets From Rudna Deep Copper Mine, SW Poland: Implications on Detailed Structural Resolution and Short-Term Hazard Assessment, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10209, https://doi.org/10.5194/egusphere-egu22-10209, 2022.

EGU22-10222 | Presentations | NH4.1

Optimized setup and long-term validation of anomaly detection methods for earthquake-related ionospheric-TEC (Total Electron Content) parameter over Italy and Mediterranean area 

Roberto Colonna, Carolina Filizzola, Nicola Genzano, Mariano Lisi, Nicola Pergola, and Valerio Tramutoli

Near the end of the last century and the beginning of the new one, different types of geophysical parameters (components of the electromagnetic field in several frequency bands, thermal anomalies, radon exhalation from the ground, ionospheric parameters and more) were proposed as indicators of variability potentially related to earthquake occurrence. During the last decade, thanks to the availability of historical satellite observations, which have grown significantly large, and to the exponential growth of artificial intelligence techniques, many advances have been made in the study of seismic-related anomalies observed from space.

In this work, the variations in the Total Electron Content (TEC) parameter are investigated as an indicator of an ionospheric state potentially affected by earthquake-related phenomena. In-depth and systematic analysis of multi-year historical data series plays a key role in distinguishing anomalous TEC variations from TEC changes associated with normal ionospheric behavior or non-terrestrial forcing phenomena (mainly dominated by solar cycles and activity).

In order to detect the differences between the two types of variation, we performed an optimal setting of the methodological inputs for the detection of seismically related anomalies in ionospheric TEC using machine learning techniques, validating the findings on multiple long-term historical series (mostly nearly 20 years long). The setting was optimized using techniques capable of combining multi-year time series of TEC satellite data with multi-year seismic catalogues, simulating their behavior in tens of thousands of possible combinations and classifying the results according to criteria established a priori. Input setup and validation were done by investigating possible links between TEC anomalies and earthquakes occurring over Italy and the Mediterranean area. We will show and comment on the results of both the optimal input setting and the statistical correlation analyses subsequently performed, and we will discuss their potential impact on future developments in this field.
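Screening tens of thousands of input combinations against an a priori criterion can be sketched as an exhaustive grid evaluation. Everything below is hypothetical (parameter names, value grids, and the toy scoring function stand in for the actual methodological inputs and criteria of the study):

```python
from itertools import product

def screen_configurations(score_fn, grid):
    """Exhaustively score every combination of input settings and
    return them ranked by the a-priori criterion (higher is better)."""
    names = list(grid)
    combos = [dict(zip(names, vals)) for vals in product(*grid.values())]
    return sorted(combos, key=score_fn, reverse=True)

# hypothetical methodological inputs for a TEC anomaly detector
grid = {
    "sigma_threshold": [1.5, 2.0, 2.5, 3.0],
    "window_days": [7, 15, 30],
    "min_magnitude": [4.0, 4.5, 5.0],
}

# toy stand-in for the real criterion (e.g. hit rate minus false-alarm rate)
def toy_score(cfg):
    return -abs(cfg["sigma_threshold"] - 2.0) - cfg["window_days"] / 100.0

ranked = screen_configurations(toy_score, grid)
print(len(ranked), ranked[0])
```

With real data, the scoring function would run the full anomaly-detection and correlation pipeline for each configuration, which is where the "tens of thousands of combinations" cost arises.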

How to cite: Colonna, R., Filizzola, C., Genzano, N., Lisi, M., Pergola, N., and Tramutoli, V.: Optimized setup and long-term validation of anomaly detection methods for earthquake-related ionospheric-TEC (Total Electron Content) parameter over Italy and Mediterranean area, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10222, https://doi.org/10.5194/egusphere-egu22-10222, 2022.

EGU22-10371 | Presentations | NH4.1

Utilizing machine learning techniques along with GPS ionospheric TEC maps for potentially predicting earthquake events 

Yuval Reuveni, Sead Asaly, Nimrod Inbar, and Leead Gottlieb

The scientific use of ground- and space-based remote sensing technology is inherently vital for studying different lithospheric-tropospheric-ionospheric coupling mechanisms, which are imperative for understanding geodynamic processes. Current remote sensing technologies, operating at a wide range of frequencies using either sound or electromagnetic emitted waves, have become a valuable tool for detecting and measuring signatures presumably associated with earthquake events. Over the past two decades, numerous studies have presented promising results related to natural hazards mitigation, especially for earthquake precursors, while other studies have refuted them. Although highly relevant for geodynamic processes, the controversy around precursors that may precede earthquakes thus remains significant. Predicting where and when a natural hazard event such as an earthquake is likely to occur in a specific region of interest therefore remains a key challenge in geoscience-related research. Recently, it has been discovered that natural hazard signatures associated with strong earthquakes appear not only in the lithosphere, but also in the troposphere and ionosphere. Both ground- and space-based remote sensing techniques can be used to detect early warning signals from places where stresses build up deep in the Earth’s crust and may lead to a catastrophic earthquake. Here, we propose to implement a machine learning Support Vector Machine (SVM) technique, applied to pre-processed GPS ionospheric Total Electron Content (TEC) time series extracted from global ionospheric TEC maps, to evaluate any potential precursor caused by an earthquake and manifested as an ionospheric TEC anomaly. Each TEC time series was geographically extracted around the earthquake epicenter, calculated as a weighted average of the four closest grid points, to evaluate any potential influence caused by the earthquake. 
After filtering and screening our data for any solar or geomagnetic influence at different time scales, our results indicate that for large earthquakes (> 6 Mw) there is a potentially high probability of obtaining true negative predictions with an accuracy of 85.7%, as well as a true positive prediction accuracy of 80%. Our suggested method has also been tested with different skill scores, such as accuracy (0.8285), precision (0.85), recall (0.8), Heidke Skill Score (0.657) and True Skill Statistic (0.657).
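The skill scores quoted above are standard binary forecast verification measures derived from a confusion matrix. A self-contained sketch (the counts below are hypothetical, chosen only to illustrate the formulas, not the paper's actual contingency table):

```python
def skill_scores(tp, fp, fn, tn):
    """Standard binary forecast verification scores from a confusion matrix."""
    n = tp + fp + fn + tn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # a.k.a. probability of detection
    # True Skill Statistic (Hanssen-Kuipers): POD minus false-alarm rate
    tss = recall - fp / (fp + tn)
    # Heidke Skill Score: accuracy relative to the random-chance expectation
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n
    hss = (tp + tn - expected) / (n - expected)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "TSS": tss, "HSS": hss}

# hypothetical confusion matrix for illustration (not the paper's actual counts)
print(skill_scores(tp=16, fp=3, fn=4, tn=12))
```

Unlike raw accuracy, TSS and HSS are chance-corrected, which matters for rare-event forecasts where a trivial "never predict" classifier already scores high accuracy.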

How to cite: Reuveni, Y., Asaly, S., Inbar, N., and Gottlieb, L.: Utilizing machine learning techniques along with GPS ionospheric TEC maps for potentially predicting earthquake events, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10371, https://doi.org/10.5194/egusphere-egu22-10371, 2022.

EGU22-10488 | Presentations | NH4.1

Results of the analysis of VLF and ULF perturbations and modeling atmosphere-ionosphere coupling 

Yuriy Rapoport, Volodymyr Reshetnyk, Asen Grytsai, Alex Liashchuk, Alla Fedorenko, Masashi Hayakawa, Volodymyr Grimalsky, and Sergei Petrishchevskii

The work continues that presented by us in 2021, which included the identification of three groups of periods in the VLF amplitude variations in the waveguide Earth-Ionosphere (WGEI) according to data from Japanese receivers obtained in 2014–2017. Periods of 5–10 minutes correspond to the fundamental mode of acoustic-gravity waves (AGW) near the Brunt–Väisälä period and were first revealed in VLF signals. Apart from these values, periods of 2–3 hours and possibly 1 week were also detected; the weekly periodicity is caused by anthropogenic influence on the VLF data. The problem of the penetration of the ULF electric field into the ionosphere is investigated both within a dynamic simulation of the Maxwell equations and within the quasi-electrostatic approach. It is demonstrated that in the case of open field lines the results of the dynamic simulations differ essentially from the quasi-electrostatic approach, which is not valid there. In the case of closed field lines, the simulation results are practically the same for both approaches and correspond to measurements of plasma perturbations in the ionosphere. It is shown that the diurnal cycle is most clearly visible in the variations of the VLF amplitudes. Disturbances from various phenomena also appear in the VLF data series. One of the strongest geomagnetic storms in the analyzed time range was the St. Patrick's Day event (March 17, 2015), which is not reflected in the Japanese data because it occurred at night for East Asia. The use of information entropy in VLF signal processing was tested, and its main features were determined. Variations in information entropy at different stations are discussed in detail. It has been found that information entropy shows maxima near sunrise and sunset. 
The location of these peaks relative to the moments of sunrise and sunset changes with the seasons, which is probably connected with the solar terminator passage at the heights of VLF signal propagation. A study of 109 earthquakes during 2014–2017 did not show a clear dependence of information entropy when using superposed epoch analysis, although a slight decrease in information entropy was observed before some of the earthquakes. The effect of solar flares on information entropy has been established, but this issue needs further study. We have developed a model describing the penetration into the ionosphere of a nonlinear AGW packet excited by a ground source. The complex modulation of the initial AGW includes acoustic waves with close frequencies and random phases. The model is important for the interpretation of atmosphere–ionosphere coupling along with seismo-ionospheric coupling. We are working on applying this model to the spectrum of the VLF waves in the WGEI and on unified models of atmosphere–ionosphere coupling due to AGW and the electromagnetic field excited by the same source in the lower atmosphere. Such a model would be important for understanding seismogenic and tropical cyclone influence on the ionosphere.
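The information entropy used in the VLF analysis above is, in the simplest reading, the Shannon entropy of the signal's amplitude distribution. A minimal histogram-based sketch on synthetic signals (the bin count and test signals are illustrative assumptions, not the authors' settings):

```python
import math
import random

def information_entropy(samples, bins=16):
    """Shannon information entropy (in bits) of a signal's amplitude
    distribution, estimated from a histogram."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in samples:
        k = min(int((x - lo) / width), bins - 1)
        counts[k] += 1
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# a steady sinusoid concentrates amplitude in few bins; noise spreads it out
random.seed(3)
sine = [math.sin(2 * math.pi * i / 50) for i in range(5000)]
noise = [random.uniform(-1, 1) for _ in range(5000)]
print(information_entropy(sine), information_entropy(noise))
```

Entropy maxima near sunrise and sunset are then naturally read as times when the amplitude distribution is most disordered, e.g. during the terminator transition.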

How to cite: Rapoport, Y., Reshetnyk, V., Grytsai, A., Liashchuk, A., Fedorenko, A., Hayakawa, M., Grimalsky, V., and Petrishchevskii, S.: Results of the analysis of VLF and ULF perturbations and modeling atmosphere-ionosphere coupling, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10488, https://doi.org/10.5194/egusphere-egu22-10488, 2022.

EGU22-10961 | Presentations | NH4.1

Regional applicability of earthquake forecasts using geoelectric statistical moments: Application to Kakioka, Japan 

Hong-Jia Chen, Katsumi Hattori, and Chien-Chih Chen

Electromagnetic anomalies have become promising for short-term earthquake forecasting. A forecasting algorithm based on the statistical moments of geoelectric data was previously developed and applied in Taiwan. The objective of our research was to investigate such a reliable, rigorously testable algorithm for issuing earthquake forecasts. We tested the applicability of the forecasting algorithm by applying it to geoelectric data and an earthquake catalog from Kakioka, Japan, covering a long-term period of 26 years. We calculated the variance, skewness, and kurtosis of the geoelectric data each day, determined their anomalies, and then compared them with earthquake occurrences through the forecasting algorithm. We observed that the anomalies of variance, skewness, and kurtosis significantly precede earthquakes, suggesting that the geoelectric data distributions deviate from normal distributions before earthquakes. Furthermore, the forecasting algorithm can select robust optimal models and produce explicit forecasting probabilities for two-thirds of all experimental cases. We therefore conclude that the forecasting algorithm based on statistical moments of geoelectric data is universal and may contribute to short-term earthquake forecasting.
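The daily statistical moments at the heart of the method can be sketched in a few lines; the synthetic "quiet" and "disturbed" days below are illustrative, not Kakioka data:

```python
import math
import random

def moments(window):
    """Sample variance, skewness, and kurtosis of one day of geoelectric data."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in window) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in window) / (n * var ** 2)
    return var, skew, kurt

# a Gaussian-like day gives skewness near 0 and kurtosis near 3;
# heavy-tailed disturbances push kurtosis well above 3
random.seed(4)
quiet = [random.gauss(0, 1) for _ in range(10000)]
disturbed = quiet[:-20] + [random.gauss(0, 12) for _ in range(20)]
print(moments(quiet)[2], moments(disturbed)[2])
```

Departures of skewness from 0 and kurtosis from 3 are precisely the "deviation from a normal distribution" that the abstract reports preceding earthquakes.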

How to cite: Chen, H.-J., Hattori, K., and Chen, C.-C.: Regional applicability of earthquake forecasts using geoelectric statistical moments: Application to Kakioka, Japan, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10961, https://doi.org/10.5194/egusphere-egu22-10961, 2022.

EGU22-11299 | Presentations | NH4.1

b-value and kinematic parameters from 3D focal mechanisms distributions in Southern California 

Andrea Carducci, Antonio Petruccelli, Angelo De Santis, Rita de Nardis, and Giusy Lavecchia

The frequency-magnitude relation of earthquakes, with particular attention to the b-value of the Gutenberg-Richter law, is computed for Southern California. A three-dimensional grid is employed to sample relocated focal mechanisms and to determine both the magnitude of completeness and the b-value at each node. The sampling radius and grid size are chosen according to seismogenic source dimensions. The SCEC Community Fault Model is used to compare the main fault systems with the calculated 3D distributions.

The b-values are compared to Aλ, a streamlined kinematic fault quantification that does not require inversion, since it depends directly on the individual rakes of focal mechanisms. Potential relationships between the two quantities are then computed through multiple regressions over increasing depth ranges: they may help seismic hazard assessment by relating the frequency and size of earthquakes to kinematic features. The rheological transition from elastic to plastic conditions is computed under different physical constraints, and its influence on the b-value and Aλ is also analyzed. Among the proposed linear, polynomial, and harmonic equations, the linear model is statistically assessed as the most probable one relating the two parameters at different depth ranges. The b-value versus Aλ results are rendered in a 3D figure, where point data are interpolated by “Lowess smoothing” surfaces to visually check their constancy with depth.
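The abstract does not state which b-value estimator is used per node; a common choice for such node-wise estimates is the Aki (1965) maximum-likelihood formula with Utsu's binning correction, sketched here on a synthetic Gutenberg-Richter sample (all names and values illustrative):

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.0):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= mc,
    with Utsu's dm/2 correction for magnitudes binned in steps of dm."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic Gutenberg-Richter catalog with true b = 1.0: magnitudes above
# the completeness level mc are exponential with scale log10(e)/b
rng = np.random.default_rng(1)
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=50_000)
b_hat = b_value_mle(mags, mc=2.0)
```

In a gridded application the same estimator is simply applied to the events sampled within each node's radius, provided enough events above completeness remain.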

How to cite: Carducci, A., Petruccelli, A., De Santis, A., de Nardis, R., and Lavecchia, G.: b-value and kinematic parameters from 3D focal mechanisms distributions in Southern California, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11299, https://doi.org/10.5194/egusphere-egu22-11299, 2022.

EGU22-11511 | Presentations | NH4.1 | Highlight

Earthquake forecasting probability by statistical correlations between low to moderate seismic events and variations in geochemical parameters 

Lisa Pierotti, Cristiano Fidani, Gianluca Facca, and Fabrizio Gherardi

Since late 2002, a network of six automatic monitoring stations has been operating in Tuscany, Central Italy, to investigate possible geochemical precursors of earthquakes. The network is operated by the Institute of Geosciences and Earth Resources (IGG) of the National Research Council of Italy (CNR), in collaboration with, and with the financial support of, the Government of the Tuscany Region. The areas of highest seismic risk in the region (Garfagnana, Lunigiana, Mugello, the Upper Tiber Valley, and Mt. Amiata) are currently monitored. The monitoring stations are equipped with multi-parametric sensors that measure temperature, pH, electrical conductivity, redox potential, and dissolved CO2 and CH4 concentrations in spring waters. The elaboration of long-term time series allowed an accurate definition of the geochemical background and the recognition of a number of geochemical anomalies concomitant with the most energetic seismic events that occurred during the monitoring period (Pierotti et al., 2017).

In an attempt to further exploit data from the geochemical network of Tuscany in a seismic risk reduction perspective, we present here a new statistical analysis that focuses on the possible correlation between low-to-moderate seismic events and variations in the chemical-physical parameters detected by the monitoring network. The approach relies on estimating a conditional probability for earthquake forecasting from the correlation coefficient between seismic events and signal variations (Fidani, 2021).

Seismic events (EQ) are classified according to a magnitude threshold Mo: we set EQ = 0 if no seismic event with M > Mo was observed, and EQ = 1 if at least one seismic event with M > Mo was observed. Chemical-physical (CP) events are defined analogously with respect to an appropriate amplitude threshold Ao, with CP = 0 if the amplitude A < Ao, and CP = 1 if A > Ao. Digital time series were elaborated from the data collected over the last 10 years, with EQs declustered and CPs detrended for external influences. Pairs of events with the same time difference T_EQ − T_CP between EQs and CPs were summed in a histogram. A Pearson correlation coefficient corr(EQ, CP) was then obtained starting from the definition of covariance.

A conditional probability for EQ forecasting is estimated from the correlation coefficient, in an attempt to use data from the CP network of Tuscany in a seismic risk reduction framework. The approach consists in evaluating the EQ probability in a defined area, given a CP detection by the station in the same area. When a correlation between EQs and CPs exists and the time difference is that evidenced by the correlation, the conditional probability P(EQ|CP) is increased, with respect to the unconditioned probability P(EQ) when a CP event is detected, by a term proportional to the correlation coefficient:

P(EQ|CP) = P(EQ) + corr(EQ,CP) · √[P(EQ)(1 − P(EQ)) P(CP)(1 − P(CP))] / P(CP),

where P(CP) is the unconditioned probability of CP.
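For binary (Bernoulli) EQ/CP series, a conditional probability built from the Pearson correlation in this way reduces algebraically to the direct frequency estimate, which can be verified numerically (variable names and synthetic data are illustrative, not from the study):

```python
import numpy as np

def conditional_eq_probability(eq, cp):
    """P(EQ=1 | CP=1) expressed as the unconditioned P(EQ) plus a term
    proportional to the Pearson correlation of the two binary series."""
    eq, cp = np.asarray(eq, float), np.asarray(cp, float)
    p_eq, p_cp = eq.mean(), cp.mean()
    corr = np.corrcoef(eq, cp)[0, 1]
    sigma = np.sqrt(p_eq * (1 - p_eq) * p_cp * (1 - p_cp))
    return p_eq + corr * sigma / p_cp

# Synthetic binary series in which ~30 % of CP detections carry an EQ signal
rng = np.random.default_rng(2)
cp = rng.integers(0, 2, 10_000)
eq = np.where(rng.random(10_000) < 0.3, cp, rng.integers(0, 2, 10_000))
p_cond = conditional_eq_probability(eq, cp)  # equals eq[cp == 1].mean()
```

The identity follows because, for 0/1 variables, the variance is p(1 − p) and the correlation term times the two standard deviations is exactly the covariance.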


Fidani, C. (2021). Front. Earth Sci. 9:673105.

Pierotti, L. et al. (2017). Physics and Chemistry of the Earth, Parts A/B/C, 98, 161-172.

 

How to cite: Pierotti, L., Fidani, C., Facca, G., and Gherardi, F.: Earthquake forecasting probability by statistical correlations between low to moderate seismic events and variations in geochemical parameters, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11511, https://doi.org/10.5194/egusphere-egu22-11511, 2022.

EGU22-12275 | Presentations | NH4.1

Virtual Earthquakes Cooperating with Natural Hazards and Simultaneously Scheduled Seismic Activities 

T. Sengor

Examples illustrating possible influences of natural hazards that will occur in the future on seismic activities occurring at the present time are given in [1]. These examples force us to ask whether operational connections originating from future-time natural events (NEs) act on present-time NEs or not. The analytical foundations of such cooperation are derived in [2]-[3].

Neither past-time NEs nor future-time NEs exist in the topology of present-time NEs when we attempt to observe and measure all of them at the same location at the present time, that is, within the present time's temporal and spatial metric or, in other words, within a space-time differential displacement [4]. This situation elevates the absence and/or presence of NEs in a temporal topology to a principle about the occurrence of NEs in their specific manifolds [4]. The simple example below may help clarify this point:

Example 1: If you want to become a medical doctor in the future, you must study and learn medicine in an official way. Without doing so in your past and present, you cannot earn the medical degree in your future.

Result 1: Future-time NEs exhibit cooperation with both past- and present-time NEs.

Example 1 and the connected Result 1 illustrate that the future-time event of being a medical doctor operates on the past- and present-time event of learning medicine; Principle 1 below therefore captures the processes shaping the cooperation among past, present, and future NEs:

Principle 1: There is definitive and/or fuzzy cooperation among the NEs of the future, past, and present times within the NEs' topology.

The retarded potential in gauge form is split into two parts: the first is the part of the Fourier transform associated with the future time's NEs, and the second is a Fourier sine transform. The first part involves the ingredients of future-time NEs and has the property of a forwarded potential. The second part involves the ingredients of both past- and present-time NEs and fits the properties of events in the past and/or present.

Principle 1 was checked against several earthquakes recorded in 1999-2004 [5]-[6], and some important results are shared in [1]. The present author calls future-time earthquake activities that cooperate with past- and/or present-time seismic activities virtual earthquakes (VEQs), and presents the topological processes, with their analytical extractions, derived from the above-mentioned observations.

-------------------

[1] Sengor T, http://meetingorganizer.copernicus.org/EGU2020/EGU2020-22589.pdf.

[2] Sengor T, Helsinki University of Tech., Electromagnetics Lab. Report 344, Nov. 2000, ISBN 951-22-5258-9, ISSN 1456-632X.

[3] Sengor T, Helsinki University of Tech., Electromagnetics Lab. Report 347, Dec. 2000, ISBN 951-22-5274-0, ISSN 1456-632X.

[4] Sengor T, Invited paper, doi:10.23919/URSI-ETS.2019.8931455.

[5] Sengor T, http://meetingorganizer.copernicus.org/EGU2019/EGU2019-17127.pdf.

[6] Sengor T, Helsinki University of Tech., Electromagnetics Lab. Report 368, May 2001, ISBN 951-22-5275-1, ISSN 1456-632X.

How to cite: Sengor, T.: Virtual Earthquakes Cooperating with Natural Hazards and Simultaneously Scheduled Seismic Activities, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12275, https://doi.org/10.5194/egusphere-egu22-12275, 2022.

EGU22-12349 | Presentations | NH4.1 | Highlight

Multi-channel singular spectrum analysis of soil radon concentration, Japan: Relationship between soil radon flux and precipitation and the local seismic activity 

Katsumi Hattori, Kazuhide Nemoto, Haruna Kojina, Akitsugu Kitade, Shu kaneko, Chie Yoshino, Toru Mogi, Toshiharu Konishi, and Dimitar Ouzounov

Recently, many papers have reported electromagnetic pre-earthquake phenomena such as anomalous geomagnetic, ionospheric, and atmospheric changes. Ionospheric anomalies preceding large earthquakes are among the most promising of these phenomena, and the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model has been proposed to explain them. In this study, to evaluate by observation the possibility of the chemical channel of LAIC, we installed sensors for the atmospheric electric field, atmospheric ion concentration, atmospheric Rn concentration, soil radon concentration (SRC), and weather elements at Asahi station, Boso, Japan. Since the atmospheric electricity parameters are strongly influenced by weather factors, it is necessary to remove these effects as much as possible. To this end, we apply Multi-channel Singular Spectrum Analysis (MSSA) to remove these influences from the variation of SRC and to estimate the soil Rn flux (SRF). We investigated the correlations (1) between SRF and precipitation and (2) between SRF and the local seismic activity within an epicentral distance of 50 km from Asahi station. The preliminary results show that SRF increased significantly after heavy precipitation of 20 mm or more in total over 2 hours; we proposed two types of models, a rainwater load model and a rainwater infiltration model, and it appears appropriate that both mechanisms are at work.

 

How to cite: Hattori, K., Nemoto, K., Kojina, H., Kitade, A., kaneko, S., Yoshino, C., Mogi, T., Konishi, T., and Ouzounov, D.: Multi-channel singular spectrum analysis of soil radon concentration, Japan: Relationship between soil radon flux and precipitation and the local seismic activity, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12349, https://doi.org/10.5194/egusphere-egu22-12349, 2022.

EGU22-1767 | Presentations | HS3.4

Forecasting streamflow using Artificial Neural Network (ANN) with different spatial discretizations of the watershed: use case on the Au Saumon watershed in Quebec (Canada) 

M. Buire, M. Ahlouche, R. Jougla, and R. Leconte

Improving streamflow forecasts helps reduce the socio-economic impacts of hydrology-related damages. Among these applications, improving hydropower production is a challenge, even more so in a context of climate change. Deep learning models have drawn the attention of scientists working on forecasting models based on physical laws, after gaining recognition in other domains. Artificial Neural Networks (ANNs) offer promising performance for streamflow forecasting, including good accuracy and shorter run times compared to traditional physically-based models.

 

The objective of this study is to compare different spatial discretization schemes of the inputs to an ANN model for streamflow forecasting. The study focuses on the “Au Saumon” watershed in Southern Quebec (Canada) during summer periods, with a forecast window of 7 days at a daily timestep. Parameterization of the ANN was a key preliminary step: the number of neurons in the hidden layer was first optimized, leading to 6 neurons. The model was trained on an 11-year dataset (2000-2005 and 2007-2011), followed by validation on one dry (2012) and one wet (2006) year to take extreme hydrologic regimes into account.

 

To conduct this study, the physically-based hydrological model Hydrotel serves as the reference against which our results are compared. The model represents watershed heterogeneity using hydrological units based on land use, soil type, and topography, called Relatively Homogeneous Hydrological Units (RHHU). The Nash-Sutcliffe Efficiency (NSE) is the main evaluation criterion. As a preliminary step, we had to ensure that the ANN model can satisfactorily mimic Hydrotel. With the same model inputs, that is, the same variables and the same spatial discretizations (total precipitation, daily maximum and minimum temperatures, and soil surface humidity), the ANN forecasts were found to be better than those of Hydrotel for one- to seven-day forecasts.
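The NSE criterion used throughout is straightforward to compute; a minimal sketch (the toy observations are illustrative):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 for a perfect fit; 0 means the forecast
    is only as good as predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0])
perfect = nse(obs, obs)                       # 1.0 for a perfect forecast
benchmark = nse(obs, np.full(4, obs.mean()))  # 0.0 for the mean benchmark
```

Negative values, by the same formula, indicate a forecast worse than simply issuing the observed mean.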

 

Three watershed spatial discretizations were tested: global, fully distributed, and semi-distributed. For the global model, the hydrometeorological input data of the ANN model were averaged across all RHHUs; complexity is reduced at the cost of spatial detail and heterogeneity. For the fully distributed model, a regular grid of six 28 × 28 km cells covering the whole watershed was defined. For the semi-distributed model, the spatial distribution of the input data was that of the RHHUs. For this discretization, the state variables (soil moisture and outflow) were updated at each forecast timestep, either on all RHHUs or only on the RHHU at the outlet.

 

The accuracy differed depending on the spatial discretization of the inputs. The fully distributed model performed worst, with an NSE of 0.85, while the global model surprisingly performed better, with an NSE of 0.93. Moreover, updating soil moisture on all RHHUs of the semi-distributed model improved the NSE across the entire forecast window.

This research will next assess the performance of the ANN model developed using ERA5-Land precipitation and temperature reanalyses together with ground observations of soil moisture. Given the promising results obtained with the fully and semi-distributed models, our ANN model will also be tested with state variables retrieved from satellite data, such as surface soil moisture from the SMAP and SMOS missions.



How to cite: Buire, M., Ahlouche, M., Jougla, R., and Leconte, R.: Forecasting streamflow using Artificial Neural Network (ANN) with different spatial discretizations of the watershed : use case on the Au Saumon watershed in Quebec (Canada)., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1767, https://doi.org/10.5194/egusphere-egu22-1767, 2022.

EGU22-2254 | Presentations | HS3.4

Flood forecasting using sensor network and Support Vector Machine model 

Jakub Langhammer

Machine learning has shown great promise for hydrological modeling because, unlike conventional approaches, it allows efficient processing of the big data provided by recent automatic monitoring networks. This research presents a Support Vector Machine (SVM) model designed for modeling floods in a montane environment based on data from a distributed automated sensor network. The study aimed to test the reliability of the SVM model in predicting the different types of flood events occurring in a mid-latitude headwater basin experiencing the effects of climate and land use change.

The sensor network comprises four hydrological and two meteorological stations located in the headwaters of the montane Vydra basin, which experiences intense forest disturbance, rising air temperatures, and frequent flood events. The automated hydrological stations have been operating in the study area for ten years, recording water levels at a 10-minute interval with online access to the data. The meteorological stations monitor air temperature, precipitation, and snow cover depth at the same time step.

The model was built using Support Vector Machines, specifically the nu-SVR algorithm as implemented in the LibSVM library. It was trained and validated on a comprehensive sample of hydrological observations and tested on scenarios covering different types of extreme events. The simulation scenarios included floods from a single summer storm, recurrent storms, prolonged regional rain, snowmelt, and a rain-on-snow event.
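The abstract names the nu-SVR algorithm from LibSVM; scikit-learn's NuSVR wraps that same library, so a hedged toy sketch of such a setup (features, hyperparameters, and the synthetic data are illustrative only, not the study's configuration) might look like:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVR

# Toy stand-in for lagged hydrometeorological features -> water level
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))     # e.g. lagged rain, temperature, snow depth
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)

# Scaling inputs before an RBF-kernel SVR is standard practice
model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
model.fit(X[:400], y[:400])
r2 = model.score(X[400:], y[400:])  # held-out goodness of fit (R^2)
```

The nu parameter bounds the fraction of support vectors and training errors, which is the practical difference from the epsilon-SVR formulation.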

The results proved the robustness and good performance of the data-driven SVM model in simulating hydrological time series. The RMSE model performance ranged from 0.91 to 0.97 for the individual scenarios, without substantial errors in the fit of the trend, the timing of the events, peak values, or flood volumes. The model reliably reconstructed even complex flood events, such as rain-on-snow episodes and flooding from recurrent precipitation.

The research demonstrated that the data-driven SVM model provides a reliable and robust tool for simulating flood events from sensor network data, including in a montane environment featuring rapid runoff generation, transient environmental conditions, and variable flood event types. The SVM model efficiently handled the large data volumes from the sensor network and, under such conditions, is a promising approach for operational flood forecasting and hydrological research.

How to cite: Langhammer, J.: Flood forecasting using sensor network and Support Vector Machine model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2254, https://doi.org/10.5194/egusphere-egu22-2254, 2022.

EGU22-3281 | Presentations | HS3.4

Assessment of Transfer Learning Techniques to Improve Streamflow Predictions in Data-Sparse Regions 

Yegane Khoshkalam, Farshid Rahmani, Alain N. Rousseau, Kian Abbasnezhadi, Chaopeng Shen, and Etienne Foulon

Reliable streamflow predictions are critical for managing water resources for flood warning, agricultural irrigation apportionment, and hydroelectric production, to name a few. However, there are geographical heterogeneities in the observed streamflow data, river basin geophysical attributes, and meteorological data available to support such predictions. Moreover, in data-sparse regions, both process-based and data-driven models are difficult to calibrate or train sufficiently, making it harder to achieve satisfactory predictions. That said, it is possible to transfer knowledge from regions with dense measured data to data-sparse regions. In earlier work, we showed that transfer learning based on a long short-term memory (LSTM) network, pre-trained over the conterminous United States, could improve daily streamflow prediction in Quebec (Canada) when compared to a semi-distributed hydrological model (HYDROTEL). The dataset used for pre-training (source dataset) was the Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) dataset, while the data for the basins at the target locations (local dataset) were extracted from the Hydrometeorological Sandbox - École de Technologie Supérieure (HYSETS). Both datasets provide access to various types of information at different spatial resolutions. While HYSETS generally spans 1950 to 2018, the temporal interval for most of the basins reported in CAMELS goes back only to 1980. Both CAMELS and HYSETS include daily meteorological data (precipitation, temperature, etc.), streamflow observations, and basin physiographic attributes (considered time-invariant, or static). 
In this work, the techniques applied to further improve the streamflow simulations included (i) using streamflow observations and simulated flows from HYDROTEL as inputs to the LSTM model, (ii) using different forcing (meteorological) and static attribute data from the source and local datasets, and (iii) adding basins from HYSETS with similar climatological features to the model training. The ultimate goal was to improve the accuracy of the predicted hydrographs through transfer learning, with an emphasis on enhancing the prediction of peak flows, using the Kling-Gupta efficiency (KGE) and Nash-Sutcliffe efficiency (NSE) metrics. This investigation revealed the benefits of transfer learning techniques based on deep learning models for improving streamflow predictions, compared to applying a distributed hydrological model in data-sparse regions.

How to cite: Khoshkalam, Y., Rahmani, F., Rousseau, A. N., Abbasnezhadi, K., Shen, C., and Foulon, E.: Assessment of Transfer Learning Techniques to Improve Streamflow Predictions in Data-Sparse Regions, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3281, https://doi.org/10.5194/egusphere-egu22-3281, 2022.

EGU22-3661 | Presentations | HS3.4

Neural ODEs in Hydrology: Fusing Conceptual Models with Deep Learning for Improved Predictions and Process Understanding 

Marvin Höge, Andreas Scheidegger, Marco Baity-Jesi, Carlo Albert, and Fabrizio Fenicia

Deep learning methods have repeatedly proven to outperform conceptual hydrologic models in rainfall-runoff modelling. Although attempts are being made to investigate the internals of such deep learning models, traceability of model states and processes, and of their interrelations with model input and output, is not yet fully given. Direct interpretability of mechanistic processes has always been considered an asset of conceptual models, helping to gain system understanding alongside predictability. We introduce hydrologic Neural ODE models that perform as well as state-of-the-art deep learning methods in rainfall-runoff prediction while maintaining the ease of interpretability of conceptual hydrologic models. In Neural ODEs, model-internal processes that are typically hard-coded in differential equations are substituted by neural networks. Neural ODE models therefore offer a way to fuse deep learning with mechanistic modelling while yielding time-continuous solutions. We demonstrate the basin-specific predictive capability for several hundred catchments of the continental US, and give exemplary insight into what the neural networks within the ODE models have learned about the model-internal processes. Further, we discuss the role of Neural ODE models as a middle ground between pure deep learning and pure conceptual hydrologic models.
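As a rough illustration of the idea (not the authors' implementation), consider a one-bucket model dS/dt = P(t) − q(S) in which the hard-coded outflow law q(S) is replaced by a small, here untrained, network; an explicit Euler step stands in for the adaptive ODE solvers normally used:

```python
import numpy as np

rng = np.random.default_rng(4)
# A tiny random-weight network standing in for the hard-coded outflow law q(S)
W1, b1 = rng.normal(size=(8, 1)), np.zeros(8)
W2, b2 = np.abs(rng.normal(size=(1, 8))), np.zeros(1)

def q_net(storage):
    """Learned outflow q(S); a softplus output keeps the flux non-negative."""
    h = np.tanh(W1 @ np.array([storage]) + b1)
    return float(np.logaddexp(0.0, W2 @ h + b2))  # softplus(x) = log(1+e^x)

def simulate(precip, dt=0.1, s0=1.0):
    """Explicit-Euler integration of the bucket ODE dS/dt = P(t) - q(S)."""
    s, runoff = s0, []
    for p in precip:
        s = s + dt * (p - q_net(s))
        runoff.append(q_net(s))
    return np.array(runoff)

runoff = simulate(np.ones(100))  # response to constant unit precipitation
```

Training would fit the network weights so that the simulated runoff matches observed discharge; afterwards, plotting the learned q(S) against storage is what makes the internal process directly interpretable.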

How to cite: Höge, M., Scheidegger, A., Baity-Jesi, M., Albert, C., and Fenicia, F.: Neural ODEs in Hydrology: Fusing Conceptual Models with Deep Learning for Improved Predictions and Process Understanding, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3661, https://doi.org/10.5194/egusphere-egu22-3661, 2022.

EGU22-3825 | Presentations | HS3.4

Karst spring discharge modeling based on deep learning using spatially distributed input data 

Andreas Wunsch, Tanja Liesch, Guillaume Cinkus, Nataša Ravbar, Zhao Chen, Naomi Mazzillli, Hervé Jourde, and Nico Goldscheider

Despite many existing approaches, modeling karst water resources remains challenging and often requires solid system knowledge. Artificial Neural Network approaches offer a convenient alternative by establishing a simple input-output relationship on their own. In this context, however, temporal and especially spatial data availability is often an important constraint, as usually few or no climate stations are available within a karst spring catchment. Spatial coverage is hence often unsatisfactory and can introduce severe uncertainties. To overcome these problems, we use 2D Convolutional Neural Networks (CNNs) to directly process gridded meteorological data, followed by a 1D CNN that performs the karst spring discharge simulation. We investigate three karst spring catchments in the Alpine and Mediterranean regions with different meteorologic-hydrological characteristics and hydrodynamic system properties. We compare our 2D models both to existing modeling studies in these regions and to our own 1D models conventionally based on climate station input data. Our results show that the models are excellently suited to simulating karst spring discharge and rival the results of existing approaches in the respective areas. The 2D models show a better fit than the 1D models in two of three cases and learn relevant parts of the input data themselves; by performing a spatial input sensitivity analysis, we further show their usefulness for localizing the position of karst catchments.

How to cite: Wunsch, A., Liesch, T., Cinkus, G., Ravbar, N., Chen, Z., Mazzillli, N., Jourde, H., and Goldscheider, N.: Karst spring discharge modeling based on deep learning using spatially distributed input data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3825, https://doi.org/10.5194/egusphere-egu22-3825, 2022.

EGU22-3946 | Presentations | HS3.4

Graph Neural Networks for Reservoir Level Forecasting and Draught Identification 

Aiden Durrant, David Haro, and Georgios Leontidis

The management of water resource systems is a longstanding and inherently complex problem, balancing an increasing number of interests to meet short- and long-term objectives sustainably. The difficulty of analyzing large-scale, multi-reservoir water systems is compounded by the scale and interpretation of the historical data. Therefore, to assist in the decision-making processes for water allocation, we propose the use of machine learning, specifically deep learning, to uncover and interpret relationships in high-dimensional data that enable more accurate forecasting.

We explore the problem of reservoir level prediction as a pilot study, comparing traditional machine learning approaches with our proposed spatial-temporal graph neural networks, which embed the topological structure of the water system. The graph convolutional neural network explicitly captures spatial interactions among river segments within the system. The graph is constructed as follows: nodes represent the reservoir and river monitoring stations; edges define the characteristics of the river sections connecting these stations (e.g., distance, flow); and multiple states of this graph are taken, each at a different measurement interval. We then train the network to predict the water level of a node (reservoir measurement station) from previous time intervals. The proposed network is trained on historical data of the Ebro basin, Spain, from 1981 to 2018, specifically using the flow rates of the river gauging stations and the fill levels of the reservoir stations, together with characteristics defining each component of the water system.
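The graph construction described above can be sketched in a few lines; all station names, attributes, and values here are invented for illustration:

```python
# Nodes: reservoir and river monitoring stations; edges: river sections
# carrying attributes of the connecting reach (e.g. distance).
stations = {
    "res_A": {"kind": "reservoir", "level": 412.3},
    "riv_1": {"kind": "river", "flow": 55.0},
    "riv_2": {"kind": "river", "flow": 31.5},
}
edges = [
    ("riv_1", "res_A", {"distance_km": 12.4}),
    ("riv_2", "res_A", {"distance_km": 8.9}),
]

# Adjacency list, the structure a graph-convolution layer aggregates over:
# each node collects features from its upstream neighbours.
upstream = {}
for src, dst, attr in edges:
    upstream.setdefault(dst, []).append((src, attr))

# A trivial aggregation: total gauged inflow reaching the reservoir node
inflow = sum(stations[s]["flow"] for s, _ in upstream["res_A"])
```

A graph convolution generalizes this aggregation step, replacing the plain sum with learned, edge-attribute-weighted transformations applied over several graph snapshots in time.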

We validate our approach over a 4-year period, making predictions across various time frames, showing robustness to varied circumstances and meeting the necessary objective requirements, from daily to monthly forecasting. As an extension, we also investigate the use of our predictions for drought identification, demonstrating just one of many use cases in which machine learning can uncover vital information leading to better management and planning decisions.

How to cite: Durrant, A., Haro, D., and Leontidis, G.: Graph Neural Networks for Reservoir Level Forecasting and Draught Identification, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3946, https://doi.org/10.5194/egusphere-egu22-3946, 2022.

EGU22-4303 | Presentations | HS3.4

Fast and detailed emulation of urban drainage flows using physics-guided machine learning 

Roland Löwe, Rocco Palmitessa, Allan Peter Engsig-Karup, and Morten Grum

Hydrodynamic models (numerical solutions of the Saint Venant equations) are at the core of simulating water movements in natural streams and drainage systems. They enable realistic simulations of water movement and are directly linked to physical system characteristics such as channel slope and diameter. This feature is important for man-made drainage structures as it enables straightforward testing of the effects of varying channel designs. In cities, models with hundreds up to tens of thousands of pipes are commonly used for drainage infrastructure. Their computational expense remains high and they are not suited for a systematic screening of design options, discussing water management options in workshops, as well as many real-time applications such as data assimilation.

Hydrologists have developed many approaches to enable faster simulations. All of these do, however, compromise on the physical detail of the simulated processes (for example, by simulating only flows using linear reservoirs), and usually also on the spatial and temporal resolution of the models (for example, by simulating only flows between key points in the system). The link to physical system characteristics is thus lost. Therefore, it is challenging to incorporate such approaches into planning workflows where changing city plans require a constant revision of water management options.

Recent advances in scientific machine learning enable the creation of fast machine learning surrogates for complex systems that preserve a high spatio-temporal detail and a physically accurate simulation. We present such an approach that employs generalized residue networks for the simulation of hydraulics in drainage systems. The key concept is to train neural networks that learn how hydraulic states (level, flow and surcharge volume) at all nodes and pipes in the drainage network evolve from one time step to another, given a set of boundary conditions (surface runoff). Training is performed against the output of a hydrodynamic model for a short time series.

Once trained, the surrogates generate the same results as a hydrodynamic model in the same level of detail, and they can be used to quickly simulate the effect of many different rain events and climate scenarios. Considering pipe networks with 50 to 100 pipes, our approach achieves NSE values in the order of 0.95 for the testing dataset. Simulations are performed 10 to 50 times faster than the hydrodynamic model. Training times are in the order of 25 minutes on a single CPU. The surrogates are system specific and need to be retrained when the physical system changes. To minimize this overhead, we train surrogates for small subsystems which can subsequently be linked into a model for a large drainage network.
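The core idea of such a residue-network surrogate, advancing all hydraulic states one time step given the boundary conditions, can be sketched as follows (an untrained linear layer stands in for the trained neural network, and the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n_state, n_forcing = 6, 2   # hydraulic states at nodes/pipes, runoff inputs
W = rng.normal(scale=0.05, size=(n_state, n_state + n_forcing))

def residual_step(state, forcing):
    """One surrogate time step: next_state = state + NN(state, forcing).
    The network learns the state increment, not the state itself."""
    return state + W @ np.concatenate([state, forcing])

# Roll the surrogate forward through time under constant surface runoff
state = np.zeros(n_state)
for _ in range(10):
    state = residual_step(state, forcing=np.array([1.0, 0.5]))
```

Training fits the increment function to short hydrodynamic-model runs; once fitted, the same cheap step is iterated over arbitrarily long rain events.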

Our approach is an initial application of scientific machine learning for the simulation of hydraulics that is readily combined with other recent developments. Future research should, in particular, explore the application of physics-informed loss functions for bypassing the generation of training data from hydrodynamic simulations, and of graph neural networks to exploit spatial correlation structures in the pipe network.

How to cite: Löwe, R., Palmitessa, R., Engsig-Karup, A. P., and Grum, M.: Fast and detailed emulation of urban drainage flows using physics-guided machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4303, https://doi.org/10.5194/egusphere-egu22-4303, 2022.

EGU22-4639 | Presentations | HS3.4

Mapping Sweden’s drainage ditches using deep learning and airborne laser scanning 

William Lidberg, Siddhartho Paul, Florian Westphal, and Anneli Ågren

Drainage ditches are a common forestry practice across northern European boreal forests and in some parts of North America. Ditching lowers the groundwater level in the wet parts of the forest to improve soil aeration and support tree growth. However, the intensive ditching practice poses multidimensional environmental risks, particularly wetland and soil degradation, greenhouse gas emissions, increased nutrient and sediment loading to water bodies, and biodiversity loss. At the same time, there is a discrepancy between the potential significance of artificial water bodies, such as drainage ditches, and their low representation in scientific research and water management policy. A comparison with a national inventory showed that only 9 % of drainage ditches are present on the best available map of Sweden. The increasing understanding of the environmental risks associated with forest ditches, together with the poor representation of ditch networks in existing maps of many forest landscapes, makes detailed mapping of these ditches a priority for sustainable land and water management. Here, we combine two state-of-the-art technologies, airborne laser scanning and deep learning, to detect drainage ditches at a national scale.

 

A deep neural network was trained on airborne laser scanning data and 1607 km of manually digitized ditch channels from 10 regions spread across Sweden. 20 % of the data was set aside for testing the model. The model correctly mapped 82 % of all small drainage channels in the test data, with a Matthews correlation coefficient of 0.72. The approach requires only one topographical index, a high-pass median filter calculated from a digital elevation model with 1 m spatial resolution. This made it possible to scale up over large areas with limited computational resources, and the trained model was implemented on Microsoft Azure to map ditch channels across all of Sweden. The total mapped channel length was 970,000 km (equivalent to 24 times around the world). Visual inspection indicated that this method also classifies natural stream channels as drainage channels, which suggests that a deep neural network can be trained to detect natural stream channels in addition to drainage ditches. Because the model requires only one topographical index, this approach can be implemented in other areas with access to high-resolution digital elevation data.
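The single topographic index underpinning this workflow, a high-pass median filter of the DEM, is simple enough to sketch. The following pure-NumPy version is illustrative only; the window radius and edge handling are assumptions, not the authors' settings:

```python
import numpy as np

def high_pass_median(dem, radius=2):
    """High-pass median filter: each cell minus the median of its
    local window. Ditch channels stand out as negative residuals
    because their cells lie below the surrounding terrain."""
    pad = np.pad(dem, radius, mode="edge")  # replicate edges outward
    out = np.empty_like(dem, dtype=float)
    rows, cols = dem.shape
    for i in range(rows):
        for j in range(cols):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = dem[i, j] - np.median(window)
    return out

# A flat DEM with a 1 m deep channel across the middle row:
dem = np.full((7, 7), 100.0)
dem[3, :] = 99.0
residual = high_pass_median(dem)  # channel cells < 0, flat cells ~ 0
```

In practice such a filter would be applied to the national 1 m DEM tiles and the residual raster fed to the segmentation network.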

How to cite: Lidberg, W., Paul, S., Westphal, F., and Ågren, A.: Mapping Sweden’s drainage ditches using deep learning and airborne laser scanning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4639, https://doi.org/10.5194/egusphere-egu22-4639, 2022.

EGU22-5130 | Presentations | HS3.4

Advancing drought monitoring via feature extraction and multi-task learning algorithms 

Matteo Giuliani, Paolo Bonetti, Alberto Maria Metelli, Marcello Restelli, and Andrea Castelletti

A drought is a slowly developing natural phenomenon that can occur in all climatic zones and can be defined as a temporary but significant decrease in water availability. Over the past three decades, the cost of droughts in Europe amounted to over 100 billion euros, with the recent summer droughts being unprecedented in the last 2,000 years. Although drought monitoring and management are extensively studied in the literature, capturing the evolution of drought dynamics and associated impacts across different temporal and spatial scales remains a critical, unsolved challenge.

In this work, we contribute a machine learning procedure named FRIDA (FRamework for Index-based Drought Analysis) for the identification of impact-based drought indexes. FRIDA is a fully automated, data-driven approach that relies on advanced feature extraction algorithms to identify relevant drought drivers from a pool of candidate hydro-meteorological predictors. The selected predictors are then combined into an index representing a surrogate of the drought conditions in the considered area, including either observed or simulated water deficits or remotely sensed information on crop status. Notably, FRIDA leverages multi-task learning algorithms to upscale the analysis over a large region where drought impacts might depend on diverse but potentially correlated drivers. FRIDA captures the heterogeneous features of the different sub-regions while efficiently using all available data and exploiting the commonalities across sub-regions. In this way, the resulting predictions benefit from reduced uncertainty compared to training separate models for each sub-region. Several real-world examples will provide a synthesis of recent applications of FRIDA in case studies featuring diverse hydroclimatic conditions and variable levels of data availability.
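The abstract does not spell out FRIDA's algorithms, but the core idea of selecting relevant predictors and combining them into a surrogate index can be conveyed with a deliberately simplified, single-region stand-in (correlation ranking plus least squares; the selection rule is hypothetical, and FRIDA's actual feature extraction and multi-task learning are considerably more advanced):

```python
import numpy as np

def surrogate_drought_index(X, y, k=2):
    """Rank candidate hydro-meteorological predictors by absolute
    correlation with the impact target y, keep the top k, and combine
    them by least squares into one surrogate drought index."""
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                     for j in range(X.shape[1])])
    keep = np.argsort(corr)[::-1][:k]           # indices of top-k drivers
    coef, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    return keep, X[:, keep] @ coef              # selected drivers, index

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                   # 4 candidate predictors
y = 2.0 * X[:, 0] - 1.0 * X[:, 2]               # impact driven by 0 and 2
keep, index = surrogate_drought_index(X, y)
```

A multi-task variant would fit such indexes jointly across sub-regions so that shared structure is exploited, which is where the abstract's reduced-uncertainty claim comes from.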

How to cite: Giuliani, M., Bonetti, P., Metelli, A. M., Restelli, M., and Castelletti, A.: Advancing drought monitoring via feature extraction and multi-task learning algorithms, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5130, https://doi.org/10.5194/egusphere-egu22-5130, 2022.

EGU22-6191 | Presentations | HS3.4

An operational framework for data driven low flow forecasts in Flanders 

Tim Franken, Cedric Gullentops, Vincent Wolfs, Willem Defloor, Pieter Cabus, and Inge De Jongh

Belgium is ranked 23rd out of 164 countries in water scarcity, and third highest in Europe, according to the World Resources Institute. The warm and dry summers of the past few years have made it clear that Flanders has little if any buffer to cope with a sharp increase in water demand or a prolonged period of dry weather. To increase resilience and preparedness against droughts, we developed hAIdro: an operational early warning system for low flows that enables timely, local and effective measures against water shortages. Data-driven rainfall-runoff models, which forecast droughts up to 30 days ahead, are at the core of the forecasting system.

The architecture of the data-driven hydrological models is inspired by the Multi-Timescale Long Short-Term Memory network (MTS-LSTM, [1]), which integrates past and future data in one prediction pipeline. The model architecture consists of three LSTMs organized in a branched structure. The historical branch processes historical meteorological data, remote sensing data and static catchment features into encoded state vectors. These are passed through fully connected layers to both a daily and an hourly forecasting branch, which are used to make runoff predictions on short (72 hours) and long (30 days) time horizons. The forecasting branches are fed with forecasts of rainfall and temperature, static catchment features and discharge observations. The novelty of the proposed model structure lies in the way discharge observations are incorporated: only the most recent discharge observations are used in the forecasting branches, to minimize the consequences of missing discharge observations in an operational context. The models are trained using a weighted Nash-Sutcliffe Efficiency (NSE) as the objective function, which puts additional emphasis on low flows. Results show that the newly created data-driven models perform well compared to calibrated lumped hydrological PDM models [2] for various performance metrics, including Log-NSE and NSE.
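The abstract specifies a low-flow-emphasising weighted NSE objective without giving its exact form; a plausible sketch, in which the 1/(Q + eps) weighting is an assumption, is:

```python
import numpy as np

def weighted_nse(obs, sim, eps=0.1):
    """Nash-Sutcliffe efficiency with weights that grow as observed
    flow shrinks, so errors during low flows cost more than equally
    sized errors during high flows. Equals 1 for a perfect fit."""
    w = 1.0 / (obs + eps)
    num = np.sum(w * (obs - sim) ** 2)
    den = np.sum(w * (obs - np.mean(obs)) ** 2)
    return 1.0 - num / den
```

During training one would minimise `1 - weighted_nse` (or a batched equivalent) as the loss.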

We developed a custom cloud-based operational forecasting system, called hAIdro to bring the data driven hydrological models in production. hAIdro processes large quantities of local meteorological measurements, radar rainfall data and ECMWF extended range forecasts to make probabilistic forecasts up to 30 days ahead. hAIdro has been forecasting the runoff twice a day for 262 locations spread over Flanders since April 2021. A continuous monitoring and evaluation framework provides valuable insights in the online model performance and the informative value of hAIdro.

[1] M. Gauch, F. Kratzert, D. Klotz, G. Nearing, J. Lin, and S. Hochreiter. “Rainfall–Runoff Prediction at Multiple Timescales with a Single Long Short-Term Memory Network.” Hydrol. Earth Syst. Sci., 25, 2045–2062, 2021 

[2] Moore, R. J. “The PDM rainfall-runoff model.” Hydrol. Earth Syst. Sci., 11, 483–499,  2007

How to cite: Franken, T., Gullentops, C., Wolfs, V., Defloor, W., Cabus, P., and De Jongh, I.: An operational framework for data driven low flow forecasts in Flanders, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6191, https://doi.org/10.5194/egusphere-egu22-6191, 2022.

EGU22-6231 | Presentations | HS3.4

Potential of natural language processing for metadata extraction from environmental scientific publications 

Guillaume Blanchy, Lukas Albrecht, Johannes Koestel, and Sarah Garré

Adapting agricultural management practices to changing climate is not straightforward. Effects of agricultural management practices (tillage, cover crops, amendment, …) on soil variables (hydraulic conductivity, aggregate stability, …) often vary according to pedo-climatic conditions. Hence, it is important to take these conditions into account in quantitative evidence synthesis. Extracting structured information from scientific publications to build large databases with experimental data from various conditions is an effective way to do this. This database can then serve to explain, and possibly also to predict, the effect of management practices in different pedo-climatic contexts.

However, manually building such a database by going through all publications is tedious, and given the increasing amount of literature, this task is likely to require more and more effort in the future. Natural language processing facilitates this task. In this work, we built a database of near-saturated hydraulic conductivity from tension-disk infiltrometer measurements reported in scientific publications. We used tailored regular expressions and dictionaries to extract coordinates, soil texture, soil type, rainfall, disk diameter and the tensions applied. The overall extraction results have F1-scores ranging from 0.72 to 0.91.
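As an illustration of the kind of tailored regular expressions involved (these patterns are invented for this example, not the ones used in the study):

```python
import re

DISK_RE = re.compile(r"disk\s+diameter\s+of\s+(\d+(?:\.\d+)?)\s*cm", re.I)
TENSION_RE = re.compile(r"-\d+(?:\.\d+)?")  # applied tensions are negative

def extract_metadata(sentence):
    """Pull disk diameter (cm) and applied tensions (cm) from text."""
    m = DISK_RE.search(sentence)
    diameter = float(m.group(1)) if m else None
    tensions = [float(t) for t in TENSION_RE.findall(sentence)]
    return diameter, tensions

diameter, tensions = extract_metadata(
    "Infiltration was measured with a disk diameter of 20 cm "
    "at tensions of -10, -5 and -1 cm.")
```

Real patterns additionally need unit variants, ranges and dictionary lookups (e.g. for soil type names), which is where the spread in F1-scores comes from.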

In addition, we extracted relationships between a set of driver keywords (e.g. ‘biochar’, ‘zero tillage’, …) and variables (e.g. ‘soil aggregate’, ‘hydraulic conductivity’, …) from publication abstracts based on the shortest dependency path between them. The relationships were further classified according to positive, negative or absent correlations between the driver and variable. This technique quickly provides an overview of the different driver-variable relationships and their abundance for an entire body of literature. For instance, we were able to recover the positive correlation between biochar and yield, as well as its negative correlation with bulk density.

How to cite: Blanchy, G., Albrecht, L., Koestel, J., and Garré, S.: Potential of natural language processing for metadata extraction from environmental scientific publications, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6231, https://doi.org/10.5194/egusphere-egu22-6231, 2022.

EGU22-6362 | Presentations | HS3.4

Flood Forecasting With LSTM Networks: Enhancing the Input Data With Statistical Precipitation Information 

Tanja Morgenstern, Jens Grundmann, and Niels Schütze

Reliable forecasts of water level and discharge are necessary for efficient disaster management in case of a flood event. Flood forecasting methods are developing rapidly, artificial neural networks (ANN) being one of them. ANNs belong to the data-driven models and are therefore sensitive to the quality, quantity and relevance of their input and training data.

Previous studies at the Institute of Hydrology and Meteorology at TU Dresden used both hourly discharge and precipitation time series to model the precipitation-runoff process with ANNs, e.g. deep learning LSTM networks (Long Short-Term Memory, a subcategory of ANN). The precipitation data were derived from area averages of radar data, in which the spatial structure of the precipitation, and thus important information for rainfall-runoff modelling, is lost. This is a problem especially for small-scale convective rainfall events.

As part of the KIWA project, we carry out a study with the aim of improving the reliability of flood forecasts of our LSTM networks by supplementing the input data with statistical precipitation information. For this purpose, we are adding statistical information such as area maximum and minimum of precipitation intensity, as well as its standard deviation over the area, to the area mean values of precipitation from the hourly radar data.
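The added statistics reduce each radar field to a handful of numbers per timestep; schematically (array shapes and names are illustrative):

```python
import numpy as np

def precip_features(radar):
    """Collapse a (time, y, x) radar rainfall stack into per-timestep
    area statistics: the area mean plus the max, min and standard
    deviation that retain information on the spatial distribution of
    precipitation intensity."""
    flat = radar.reshape(radar.shape[0], -1)
    return {"mean": flat.mean(axis=1), "max": flat.max(axis=1),
            "min": flat.min(axis=1), "std": flat.std(axis=1)}

radar = np.zeros((2, 4, 4))     # two hourly fields on a 4x4 grid
radar[1, 0, 0] = 8.0            # a single convective cell
feats = precip_features(radar)  # large std flags spatial concentration
```

A high standard deviation at a moderate mean is exactly the small-scale convective case in which the area mean alone is misleading.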

As this information contains details on the precipitation intensity distribution over the area, we expect an improvement of the discharge prediction quality, as well as an improvement of the timing. In addition, we expect the LSTM network to learn from the statistical information to better assess the relevance and quality of the given precipitation values and to recognize the spatial uncertainties inherent to the area means. The resulting knowledge of the network should now enable it to forecast the discharge while communicating information on the uncertainty of the current discharge forecast.

We present the preliminary results of this investigation based on small pilot catchments in Saxony (Germany) with differing hydrological and geographical characteristics.

How to cite: Morgenstern, T., Grundmann, J., and Schütze, N.: Flood Forecasting With LSTM Networks: Enhancing the Input Data With Statistical Precipitation Information, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6362, https://doi.org/10.5194/egusphere-egu22-6362, 2022.

EGU22-8471 | Presentations | HS3.4

Machine learning-derived predictions of river flow across Switzerland 

Etienne Fluet-Chouinard, William Aeberhard, Eniko Szekely, Massimilano Zappa, Konrad Bogner, Sonia I. Seneviratne, and Lukas Gudmundsson

The prediction of streamflow in gauged and ungauged basins is a central challenge of hydrology and is increasingly being met by machine learning and deep learning models. With increasing data volumes and advances in modelling techniques, the capacity of deep learning tools to compete with and complement physics-based hydrological models over a variety of settings and scales is still being explored. Here, we present initial results of the MAchine learning for Swiss (CH) river FLOW estimation (MACH-Flow) project. We train machine learning models on daily discharge data from 260 gauging stations across Switzerland covering the 1980-2020 time window. The river gauging stations included have catchment areas ranging between 0.1-3000 km2 and average streamflow between 0.1-100 m3/second. We also test a range of predictor features, including air temperature, precipitation, incoming radiation and relative humidity, as well as a number of static catchment variables. We evaluate multiple model architectures of varying complexity, from models focusing on runoff prediction over individual headwater catchments, such as neural networks and long short-term memory (LSTM) cells, to Graph Neural Networks capable of leveraging information from neighbouring stations when making point-location predictions. Predictions are generated at gauging locations as well as over 307 land units used for drought monitoring. We benchmark the deep learning methods against two process-based hydrological models: 1) the PREecipitation Runoff EVApotranspiration HRU Model (PREVAH) used operationally by Swiss federal agencies, and 2) the comparatively streamlined Simple Water Balance Model (SWBM). We compare the deep learning and physics-based models with regard to predicting daily river discharge as well as low flows during drought conditions, which are essential for water managers and planners in Switzerland.
We find that most deep learning methods, with sufficient tuning and lookback periods, can compete with the streamflow predictions of process-based models, particularly at gauging stations on larger non-regulated rivers where hydro-dynamic time lags are significant. Finally, we discuss the prospects for generating discharge predictions across all river segments of Switzerland using deep learning methods, along with challenges and opportunities to achieve this goal.

How to cite: Fluet-Chouinard, E., Aeberhard, W., Szekely, E., Zappa, M., Bogner, K., I. Seneviratne, S., and Gudmundsson, L.: Machine learning-derived predictions of river flow across Switzerland, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8471, https://doi.org/10.5194/egusphere-egu22-8471, 2022.

EGU22-9771 | Presentations | HS3.4

Simulating and multi-step reforecasting real-time reservoir operation using combined neural network and distributed hydrological model 

Chanoknun Wannasin, Claudia Brauer, Remko Uijlenhoet, Paul Torfs, and Albrecht Weerts

Reservoirs and dams are essential infrastructure for human utilization and management of water resources; yet modelling real-time reservoir operation and controlled reservoir outflow remains a challenge. Artificial intelligence techniques, especially machine learning and deep learning, have become increasingly popular in hydrological forecasting, including reservoir operation. In this study, we applied a recurrent neural network (RNN) and a long short-term memory (LSTM) network to model the operation and outflow of a large-scale multi-purpose reservoir at the real-time (daily) timescale. The study investigates the capabilities of RNN and LSTM models in simulating and reforecasting real-time reservoir outflow, considering uncertainties in model inputs, model training-testing periods, and model algorithms. The Sirikit reservoir in Thailand was selected as a case study. The main inputs for the RNN and LSTM models were daily reservoir inflow, daily storage, and the month of the year. We applied the distributed wflow_sbm model for reservoir inflow simulation (using MSWEP precipitation data) and ensemble inflow reforecasting (using ECMWF precipitation data). Daily reservoir storage was obtained from observations and from real-time recalculation based on the reservoir water balance. The models were trained and tested with 10-fold cross-validation. Results show that both RNN and LSTM models achieve high accuracy for real-time simulations and reasonable accuracy for multi-step reforecasts, and that the LSTM performs better in forecasting mode. Performance varied between cross-validation folds, being highly related to the extreme events included in either the training or the test period. With further understanding of how reservoir inflow uncertainty influences reservoir operation, we conclude that the models are potentially applicable to real-time reservoir operation and decision-making for operational water management.
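The abstract does not detail how the 10 folds were formed; for a daily series, one natural construction keeps each test period as a contiguous block so that individual extreme events are not scattered across folds (a sketch, not necessarily the authors' protocol):

```python
def blocked_kfold(n_samples, k=10):
    """Contiguous k-fold splits for a time series: each fold is one
    unbroken block used for testing while the remaining days train.
    Whether an extreme event lands in the training or test block then
    directly affects the fold's score, as reported in the abstract."""
    bounds = [round(i * n_samples / k) for i in range(k + 1)]
    for i in range(k):
        test = list(range(bounds[i], bounds[i + 1]))
        train = [j for j in range(n_samples)
                 if j < bounds[i] or j >= bounds[i + 1]]
        yield train, test
```

Iterating over `blocked_kfold(len(series))` yields the ten train/test index pairs.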

How to cite: Wannasin, C., Brauer, C., Uijlenhoet, R., Torfs, P., and Weerts, A.: Simulating and multi-step reforecasting real-time reservoir operation using combined neural network and distributed hydrological model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9771, https://doi.org/10.5194/egusphere-egu22-9771, 2022.

EGU22-10744 | Presentations | HS3.4

Large-scale evaluation of temporal trends in ANN behaviour for daily flow forecasts in Canadian catchments

E. Snieder and U. Khan

Producing accurate rainfall-runoff (RR) simulations is a longstanding challenge in hydrological research. These models often treat the RR relationship as stationary; in other words, model parameters are assumed to be fixed, time-invariant values. In reality, the RR relationship is continuously changing due to factors such as climate change, rapid urban growth, and the construction of hydraulic infrastructure. Therefore, hydrological models need to be able to adapt to these changes.

The suitability of machine learning (ML) models for flow forecasting has been well established over the past 3 decades. One advantage of such models is their ability to rapidly and continuously adapt to the non-stationary relationship between rainfall and runoff generation. However, changes in model performance and model adaptation in an operational context have not received much attention from the research community.

We present a large-scale framework for daily flow forecasting models in Canada (>100 catchments). In our framework, local artificial neural network (ANN) ensemble models are automatically trained to forecast flow on an individual catchment basis using openly available daily hydrometeorological time series data. The collection of catchments, taken from across Canada, has highly heterogeneous soil groups, land use, and climate. We propose several experiments designed to evaluate the robustness of ANN-based flow forecasting across time. Using the most recent year of observations for validation, we evaluate the effects of incrementally providing increasing amounts of historic observations. Similarly, we quantify changes to ANN model parameters (weights and biases) across increasing amounts of historic training data. Finally, we analyse feature importance across time using multiple feature importance algorithms. Our research aims to provide guidance on initial model training and adaptive learning, as ML-based approaches become increasingly adopted for operational use.

How to cite: Snieder, E. and Khan, U.: Large-scale evaluation of temporal trends in ANN behaviour for daily flow forecasts in Canadian catchments., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10744, https://doi.org/10.5194/egusphere-egu22-10744, 2022.

EGU22-11110 | Presentations | HS3.4

Physics-informed LSTM structure for recession flow prediction    

Prashant Istalkar, Akshay Kadu, and Basudev Biswal

Modeling the rainfall-runoff process has been a key challenge for hydrologists. Multiple modeling frameworks have been introduced over time to understand and predict the runoff generation process, including physics-based models, conceptual models, and data-driven models. In recent years, the use of deep learning models such as Long Short-Term Memory (LSTM) networks has increased in hydrology because of their ability to learn information contained in input sequences. Studies report that LSTM outperforms well-established hydrological models (e.g. SAC-SMA), which has led authors to question the need for process understanding in the machine learning era. In the current study, we claim that process understanding helps to reduce LSTM model complexity and ultimately improves recession flow prediction. Here, we used past streamflow information as input to an LSTM and predicted ten days of recession flow. To reduce LSTM complexity, we used insights from a conceptual hydrological model that accounts for storage-discharge dynamics. Overall, our study re-emphasizes the need to understand hydrological processes.

How to cite: Istalkar, P., Kadu, A., and Biswal, B.: Physics-informed LSTM structure for recession flow prediction   , EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11110, https://doi.org/10.5194/egusphere-egu22-11110, 2022.

EGU22-591 | Presentations | ITS2.7/AS5.2

Identifying precursors for extreme stratospheric polar vortex events  using an explainable neural network 

Zheng Wu, Tom Beucler, Raphaël de Fondeville, Eniko Székely, Guillaume Obozinski, William Ball, and Daniela Domeisen

The winter stratospheric polar vortex exhibits considerable variability in both magnitude and zonal wave structure, which arises in part from stratosphere-troposphere coupling associated with tropospheric precursors and can result in extreme polar vortex events. These extremes can subsequently influence weather in the troposphere and are thus important sources of surface predictability. However, the predictability limit of these extreme events is around 1-2 weeks in state-of-the-art prediction systems. In order to explore and improve the predictability limit of extreme vortex events, in this study we train an artificial neural network (ANN) to model stratospheric polar vortex anomalies and to identify strong and weak stratospheric vortex events. To pinpoint the origins of the stratospheric anomalies, we then employ two neural network visualization methods, SHapley Additive exPlanations (SHAP) and Layerwise Relevance Propagation (LRP), to uncover feature importance in the input variables (e.g., geopotential height and background zonal wind). The extreme vortex events can be identified by the ANN with an average accuracy of 60-80%. For the correctly identified extreme events, the composite of the feature importance of the input variables shows spatial patterns consistent with the precursors found for extreme stratospheric events in previous studies. This consistency provides confidence that the ANN is able to identify reliable indicators for extreme stratospheric vortex events and that it could help to identify the role of previously found precursors, such as the sea level pressure anomalies associated with the Siberian high. In addition to the composite of all the events, the feature importance for each individual event further reveals the physical structures in the input variables (such as the locations of the geopotential height anomalies) that are specific to that event.
Our results show the potential of explainable neural network techniques in understanding and predicting stratospheric variability and extreme events, and in searching for potential precursors for these events on subseasonal time scales.
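SHAP and LRP are attribution methods for trained networks; their shared goal of ranking input variables by their effect on skill can be conveyed with the much simpler permutation importance (shown here only as an accessible relative of those methods, not what the study uses):

```python
import numpy as np

def permutation_importance(predict, X, y, rng):
    """Model-agnostic importance: the drop in classification accuracy
    when one input column is shuffled, which breaks that feature's
    relationship with the target while leaving the others intact."""
    acc = lambda M: np.mean(predict(M) == y)
    base = acc(X)
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        drops.append(base - acc(Xp))
    return np.array(drops)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)           # only feature 0 matters
model = lambda M: (M[:, 0] > 0).astype(int)
imp = permutation_importance(model, X, y, rng)
```

Unlike this global score, SHAP and LRP attribute each individual prediction to input locations, which is what enables the per-event precursor maps described above.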

How to cite: Wu, Z., Beucler, T., de Fondeville, R., Székely, E., Obozinski, G., Ball, W., and Domeisen, D.: Identifying precursors for extreme stratospheric polar vortex events  using an explainable neural network, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-591, https://doi.org/10.5194/egusphere-egu22-591, 2022.

EGU22-676 | Presentations | ITS2.7/AS5.2

A two-stage machine learning framework using global satellite data of cloud classes for process-oriented model evaluation 

Arndt Kaps, Axel Lauer, Gustau Camps-Valls, Pierre Gentine, Luis Gómez-Chova, and Veronika Eyring

Clouds play a key role in weather and climate but are quite challenging to simulate with global climate models as the relevant physics include non-linear processes on scales covering several orders of magnitude in both the temporal and spatial dimensions. The numerical representation of clouds in global climate models therefore requires a high degree of parameterization, which makes a careful evaluation a prerequisite not only for assessing the skill in reproducing observed climate but also for building confidence in projections of future climate change. Current methods to achieve this usually involve the comparison of multiple large-scale physical properties in the model output to observational data. Here, we introduce a two-stage data-driven machine learning framework for process-oriented evaluation of clouds in climate models based directly on widely known cloud types. The first step relies on CloudSat satellite data to assign cloud labels in line with cloud types defined by the World Meteorological Organization (WMO) to MODIS pixels using deep neural networks. Since the method is supervised and trained on labels provided by CloudSat, the predicted cloud types remain objective and do not require a posteriori labeling. The second step consists of a regression algorithm that predicts fractional cloud types from retrieved cloud physical variables. This step aims to ensure that the method can be used with any data set providing physical variables comparable to MODIS. In particular, we use a Random Forest regression that acts as a transfer model to evaluate the spatially relatively coarse output of climate models and allows the use of varying input features. As a proof of concept, the method is applied to coarse grained ESA Cloud CCI data. The predicted cloud type distributions are physically consistent and show the expected features of the different cloud types. 
This demonstrates how advanced observational products can be used with this method to obtain cloud type distributions from coarse data, allowing for a process-based evaluation of clouds in climate models.

How to cite: Kaps, A., Lauer, A., Camps-Valls, G., Gentine, P., Gómez-Chova, L., and Eyring, V.: A two-stage machine learning framework using global satellite data of cloud classes for process-oriented model evaluation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-676, https://doi.org/10.5194/egusphere-egu22-676, 2022.

EGU22-696 | Presentations | ITS2.7/AS5.2 | Highlight

Latent Linear Adjustment Autoencoder: a novel method for estimating dynamic precipitation at high resolution 

Christina Heinze-Deml, Sebastian Sippel, Angeline G. Pendergrass, Flavio Lehner, and Nicolai Meinshausen

A key challenge in climate science is to quantify the forced response in impact-relevant variables such as precipitation against the background of internal variability, both in models and observations. Dynamical adjustment techniques aim to remove unforced variability from a target variable by identifying patterns associated with circulation, thus effectively acting as a filter for dynamically induced variability. The forced contributions are interpreted as the variation that is unexplained by circulation. However, dynamical adjustment of precipitation at local scales remains challenging because of large natural variability and the complex, nonlinear relationship between precipitation and circulation particularly in heterogeneous terrain. 

In this talk, I will present the Latent Linear Adjustment Autoencoder (LLAAE), a novel statistical model that builds on variational autoencoders. The Latent Linear Adjustment Autoencoder enables estimation of the contribution of a coarse-scale atmospheric circulation proxy to daily precipitation at high resolution and in a spatially coherent manner. To predict circulation-induced precipitation, the LLAAE combines a linear component, which models the relationship between circulation and the latent space of an autoencoder, with the autoencoder's nonlinear decoder. The combination is achieved by imposing an additional penalty in the cost function that encourages linearity between the circulation field and the autoencoder's latent space, hence leveraging robustness advantages of linear models as well as the flexibility of deep neural networks. 
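The key design choice, a penalty tying the autoencoder's latent space to a linear function of the circulation proxy, can be sketched as a combined objective (shapes, the squared-error form and the weight `lam` are illustrative assumptions; the published model is a variational autoencoder with further terms):

```python
import numpy as np

def llaae_objective(x, x_hat, z, circ, W, lam=1.0):
    """Reconstruction error plus a penalty encouraging the latent
    codes z to equal a linear map of the circulation field (circ @ W),
    so that circulation-induced precipitation can be predicted by
    passing the linear component through the nonlinear decoder."""
    recon = np.mean((x - x_hat) ** 2)          # autoencoder fidelity
    linearity = np.mean((z - circ @ W) ** 2)   # latent-linearity penalty
    return recon + lam * linearity
```

Minimising the second term is what lets the robustness of the linear circulation model coexist with the decoder's flexibility.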

We show that our model predicts realistic daily winter precipitation fields at high resolution based on a 50-member ensemble of the Canadian Regional Climate Model at 12 km resolution over Europe, capturing, for instance, key orographic features and geographical gradients. Using the Latent Linear Adjustment Autoencoder to remove the dynamic component of precipitation variability, forced thermodynamic components are expected to remain in the residual, which enables the uncovering of forced precipitation patterns of change from just a few ensemble members. We extend this to quantify the forced pattern of change conditional on specific circulation regimes. 

Future applications could include, for instance, weather generators emulating climate model simulations of regional precipitation, detection and attribution at subcontinental scales, or statistical downscaling and transfer learning between models and observations to exploit the typically much larger sample size in models compared to observations.

How to cite: Heinze-Deml, C., Sippel, S., Pendergrass, A. G., Lehner, F., and Meinshausen, N.: Latent Linear Adjustment Autoencoder: a novel method for estimating dynamic precipitation at high resolution, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-696, https://doi.org/10.5194/egusphere-egu22-696, 2022.

EGU22-722 | Presentations | ITS2.7/AS5.2 | Highlight

Climate-Invariant, Causally Consistent Neural Networks as Robust Emulators of Subgrid Processes across Climates 

Tom Beucler, Fernando Iglesias-Suarez, Veronika Eyring, Michael Pritchard, Jakob Runge, and Pierre Gentine

Data-driven algorithms, in particular neural networks, can emulate the effects of unresolved processes in coarse-resolution Earth system models (ESMs) if trained on high-resolution simulation or observational data. However, they can (1) make large generalization errors when evaluated in conditions they were not trained on; and (2) trigger instabilities when coupled back to ESMs.

First, we propose to physically rescale the inputs and outputs of neural networks to help them generalize to unseen climates. Applied to the offline parameterization of subgrid-scale thermodynamics (convection and radiation) in three distinct climate models, we show that rescaled or "climate-invariant" neural networks make accurate predictions in test climates that are 8K warmer than their training climates. Second, we propose to eliminate spurious causal relations between inputs and outputs by using a recently developed causal discovery framework (PCMCI). For each output, we run PCMCI on the input time series to identify the reduced set of inputs that have the strongest causal relationship with that output. Preliminary results show that we can reach similar levels of accuracy by training one neural network per output with the reduced set of inputs; stability implications when coupled back to the ESM are explored.

Overall, our results suggest that explicitly incorporating physical knowledge into data-driven models of Earth system processes may improve their ability to generalize across climate regimes, while quantifying causal associations to select the optimal set of inputs may improve their consistency and stability.

How to cite: Beucler, T., Iglesias-Suarez, F., Eyring, V., Pritchard, M., Runge, J., and Gentine, P.: Climate-Invariant, Causally Consistent Neural Networks as Robust Emulators of Subgrid Processes across Climates, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-722, https://doi.org/10.5194/egusphere-egu22-722, 2022.

EGU22-1065 | Presentations | ITS2.7/AS5.2 | Highlight

Skilful US Soy-yield forecasts at pre-sowing lead-times 

Sem Vijverberg, Dim Coumou, and Raed Hamed

Soy harvest failure events can severely impact farmers and insurance companies and raise global prices. Reliable seasonal forecasts of mis-harvests would allow stakeholders to prepare and take appropriate early action. However, especially for farmers, the reliability and lead time of current prediction systems provide insufficient information to justify within-season adaptation measures. Recent innovations have increased our ability to generate reliable statistical seasonal forecasts. Here, we combine these innovations to predict the 1-in-3 poor soy harvest years in the eastern US. We first use a clustering algorithm to spatially aggregate crop-producing regions within the eastern US that are particularly sensitive to hot-dry weather conditions. Next, we use observational climate variables (sea surface temperature (SST) and soil moisture) to extract precursor time series at multiple lags. This allows the machine learning model to learn the low-frequency evolution, which carries important information for predictability. A selection based on causal inference yields physically interpretable precursors. We show that the robustly selected predictors are associated with the evolution of the horseshoe Pacific SST pattern, in line with previous research. We use the state of the horseshoe Pacific to identify years with enhanced predictability. We achieve very high forecast skill for poor-harvest events, even 3 months prior to sowing, using strict one-step-ahead train-test splitting. Over the last 25 years, 90% of the events predicted in February were correct. When operational, this forecast would enable farmers (and insurance/trading companies) to make informed decisions on adaptation measures, e.g., selecting more drought-resistant cultivars, investing in insurance, or changing planting management.
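The strict one-step-ahead train-test splitting mentioned above can be sketched as a simple generator (a minimal illustration, not the authors' code): each target year is predicted using only the years that precede it.

```python
def one_step_ahead_splits(n_years, min_train=5):
    """Strict one-step-ahead splitting: each target year is predicted
    using only the years that precede it, so no future information
    leaks into training."""
    for test_year in range(min_train, n_years):
        train_years = list(range(test_year))
        yield train_years, test_year

# For a 10-year record with a 5-year minimum training window:
splits = list(one_step_ahead_splits(10, min_train=5))
```

This contrasts with ordinary k-fold cross-validation, where future years could end up in the training folds and inflate apparent skill.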

How to cite: Vijverberg, S., Coumou, D., and Hamed, R.: Skilful US Soy-yield forecasts at pre-sowing lead-times, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1065, https://doi.org/10.5194/egusphere-egu22-1065, 2022.

EGU22-1835 | Presentations | ITS2.7/AS5.2

Using Deep Learning for a High-Precision Analysis of Atmospheric Rivers in a High-Resolution Large Ensemble Climate Dataset 

Timothy Higgins, Aneesh Subramanian, Andre Graubner, Lukas Kapp-Schwoerer, Karthik Kashinath, Sol Kim, Peter Watson, Will Chapman, and Luca Delle Monache

Atmospheric rivers (ARs) are elongated corridors of water vapor in the lower troposphere that cause extreme precipitation over many coastal regions around the globe. They play a vital role in the water cycle in the western US, fueling most extreme west coast precipitation and sometimes accounting for more than 50% of total annual west coast precipitation (Gershunov et al. 2017). Severe ARs are associated with extreme flooding and damage, while weak ARs are typically more beneficial to society as they bring much-needed drought relief.

Precipitation is particularly difficult to predict in traditional climate models. Predicting water vapor is more reliable (Lavers et al. 2016), which makes integrated vapor transport (IVT), and hence ARs, a favorable avenue for understanding changing patterns in precipitation (Johnson et al. 2009). A variety of different algorithms are used to track ARs owing to their relatively diverse definitions (Shields et al. 2018). The Atmospheric River Tracking Method Intercomparison Project (ARTMIP) organizes and provides information on the widely accepted algorithms that exist. Nearly all of the algorithms included in ARTMIP rely on absolute and relative numerical thresholds, which can be computationally expensive and have a large memory footprint. This can be particularly problematic for large climate datasets. The vast majority of algorithms also rely heavily on wind velocity at multiple vertical levels to track ARs, which is especially costly to store in climate models and is typically not output at the temporal resolution at which ARs occur.
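IVT has a standard definition: the mass-weighted vertical integral of the horizontal moisture flux, IVT = (1/g)·sqrt((∫q·u dp)² + (∫q·v dp)²). A minimal sketch with idealised column values (not data from the study):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def _trapz(f, p):
    """Trapezoidal integral of f over pressure levels p."""
    return np.sum((f[:-1] + f[1:]) * np.diff(p)) / 2.0

def ivt(q, u, v, p):
    """Integrated vapor transport (kg m^-1 s^-1) from specific humidity q
    (kg/kg) and winds u, v (m/s) on pressure levels p (Pa, surface first).
    The minus sign accounts for pressure decreasing with height."""
    qu = -_trapz(q * u, p) / G
    qv = -_trapz(q * v, p) / G
    return np.hypot(qu, qv)

# Idealised moist column with purely zonal 20 m/s flow.
p = np.array([1000e2, 850e2, 700e2, 500e2, 300e2])
q = np.array([0.010, 0.008, 0.005, 0.002, 0.0005])
u = np.full(5, 20.0)
v = np.zeros(5)
```

For this column `ivt(q, u, v, p)` is several hundred kg m⁻¹ s⁻¹, in the range typically associated with a strong AR; threshold-based ARTMIP algorithms then test such values against fixed or percentile cutoffs.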

A recent alternative way of tracking ARs is through machine learning. A variety of neural networks are commonly applied to identify objects in cityscapes via semantic segmentation. The first of these neural networks applied to detecting ARs was DeepLabv3+ (Prabhat et al. 2020), a state-of-the-art model that demonstrates one of the highest performances of any present-day neural network on the task of identifying objects in cityscapes (Wu et al. 2019). We employ a lightweight convolutional neural network adapted from CGNet (Kapp-Schwoerer et al. 2020) to efficiently track these severe events without using wind velocity at all vertical levels as a predictor variable. When applied to cityscapes, CGNet's greatest advantage is its performance relative to its memory footprint (Wu et al. 2019): it has two orders of magnitude fewer parameters than DeepLabv3+ and is computationally less expensive. This can be especially useful when identifying ARs in large datasets. Convolutional neural networks have not previously been used to track ARs in a regional domain; this will be the first study to demonstrate the performance of this neural network on a regional domain by providing an objective analysis of its consistency with eight different ARTMIP algorithms.

How to cite: Higgins, T., Subramanian, A., Graubner, A., Kapp-Schwoerer, L., Kashinath, K., Kim, S., Watson, P., Chapman, W., and Delle Monache, L.: Using Deep Learning for a High-Precision Analysis of Atmospheric Rivers in a High-Resolution Large Ensemble Climate Dataset, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1835, https://doi.org/10.5194/egusphere-egu22-1835, 2022.

EGU22-2012 | Presentations | ITS2.7/AS5.2

Gap filling in air temperature series by matrix completion methods 

Benoît Loucheur, Pierre-Antoine Absil, and Michel Journée

Quality control of meteorological data is an important part of atmospheric analysis and prediction, as missing or erroneous data can have a negative impact on the accuracy of these environmental products.

In Belgium, the Royal Meteorological Institute (RMI) is the national meteorological service that provides weather and climate services based on observations and scientific research. RMI has collected and archived meteorological observations in Belgium since the 19th century. Currently, air temperature is monitored in Belgium at about 30 synoptic automatic weather stations (AWS) as well as at 110 manual climatological stations. In the latter stations, a volunteer observer records the daily extreme air temperatures every morning at 8 o'clock. All observations are routinely checked for errors, inconsistencies and missing values by the RMI staff. Erroneous data are corrected and gaps are filled with estimates. These quality-control tasks require considerable human intervention. With the forthcoming deployment of low-cost weather stations and the subsequent increase in the volume of data to verify, the process of data quality control and completion should become as automated as possible.

In this work, the quality-control process is fully automated using mathematical tools. We present low-rank matrix completion (LRMC) methods that we used to solve the problem of completing missing data in daily minimum and maximum temperature series. We used Monte Carlo cross-validation to train our algorithms and then tested them on a real case.
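A minimal sketch of one classical LRMC algorithm, SoftImpute (iterative SVD soft-thresholding), on a synthetic stations-by-days matrix standing in for the RMI temperature data (not the authors' implementation):

```python
import numpy as np

def soft_impute(M, mask, rank_penalty=0.5, n_iter=200):
    """Low-rank matrix completion by iterative SVD soft-thresholding
    (the SoftImpute algorithm).  mask is True where M is observed."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - rank_penalty, 0.0)   # shrink singular values
        low_rank = (U * s) @ Vt
        X = np.where(mask, M, low_rank)         # keep observed entries fixed
    return np.where(mask, M, low_rank)

# Synthetic low-rank "temperature" matrix (stations x days) with gaps.
rng = np.random.default_rng(1)
station = rng.standard_normal(20)
day = rng.standard_normal(30)
M = 10.0 + np.outer(station, day)
mask = rng.random(M.shape) > 0.2                # ~20% of entries missing
filled = soft_impute(M, mask)
```

Graph regularisation, as used in the abstract, would add penalty terms encouraging smoothness of the completed matrix along the spatial (station) and temporal (day) graphs; the plain soft-thresholding above is the unregularised core.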

Among matrix completion methods, some are regularised by graphs. In our case, the spatial and temporal components can then be represented via graphs, and by tuning the construction of these graphs we hope to improve the completion results. We then compared our methods with the state of the art, such as the inverse distance weighting (IDW) method.

All our experiments were performed with a dataset provided by the RMI, including daily minimum and maximum temperature measurements from 100 stations over the period 2005-2019.

How to cite: Loucheur, B., Absil, P.-A., and Journée, M.: Gap filling in air temperature series by matrix completion methods, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2012, https://doi.org/10.5194/egusphere-egu22-2012, 2022.

EGU22-2248 | Presentations | ITS2.7/AS5.2

Exploring flooding mechanisms and their trends in Europe through explainable AI 

Shijie Jiang, Yi Zheng, and Jakob Zscheischler

Understanding the mechanisms causing river flooding and their trends is important for interpreting past flood changes and making better predictions of future flood conditions. However, there is still a lack of quantitative, observation-based assessment of trends in flooding mechanisms. Recent years have witnessed the increasing prevalence of machine learning in hydrological modeling, and its predictive power has been demonstrated in numerous studies. Machine learning makes hydrological predictions by recognizing generalizable relationships between inputs and outputs, which, if properly interpreted, may provide further scientific insights into hydrological processes. In this study, we propose a new method using interpretable machine learning to identify flooding mechanisms based on the predictive relationship between precipitation and temperature and flow peaks. Applying this method to more than a thousand catchments in Europe reveals three primary input-output patterns within flow predictions, which can be associated with three catchment-wide flooding mechanisms: extreme precipitation, soil moisture excess, and snowmelt. The results indicate that approximately one-third of the studied catchments are controlled by a combination of the above mechanisms, while the others are mostly dominated by a single mechanism. Although no significant shifts from one dominant mechanism to another are observed for the catchments over the past seven decades overall, some catchments with single mechanisms have become dominated by mixed mechanisms and vice versa. In particular, snowmelt-induced floods have decreased significantly in general, whereas rainfall has become more dominant in causing floods, with crucial effects on flooding seasonality and magnitude. Overall, this study provides a new perspective for understanding climatic extremes and demonstrates the prospect of artificial intelligence (AI)-assisted scientific discovery in the future.

How to cite: Jiang, S., Zheng, Y., and Zscheischler, J.: Exploring flooding mechanisms and their trends in Europe through explainable AI, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2248, https://doi.org/10.5194/egusphere-egu22-2248, 2022.

EGU22-2391 | Presentations | ITS2.7/AS5.2

Exploring cirrus cloud microphysical properties using explainable machine learning 

Kai Jeggle, David Neubauer, Gustau Camps-Valls, Hanin Binder, Michael Sprenger, and Ulrike Lohmann

Cirrus cloud microphysics and their interactions with aerosols remain one of the largest uncertainties in global climate models and climate change projections. The uncertainty originates from the high spatio-temporal variability and their non-linear dependence on meteorological drivers like temperature, updraft velocities, and aerosol environment. We combine ten years of CALIPSO/CloudSat satellite observations of cirrus clouds with ERA5 and MERRA-2 reanalysis data of meteorological and aerosol variables to create a spatial data cube. Lagrangian back trajectories are calculated for each cirrus cloud observation to add a temporal dimension to the data cube. We then train a gradient boosted tree machine learning (ML) model to predict vertically resolved cirrus cloud microphysical properties (i.e. observed ice crystal number concentration and ice water content). The explainable machine learning method of SHAP values is applied to assess the impact of individual cirrus drivers as well as combinations of drivers on cirrus cloud microphysical properties in varying meteorological conditions. In addition, we analyze how the impact of the drivers differs regionally, vertically, and temporally.

We find that the tree-based ML model is able to create a good mapping between cirrus drivers and microphysical properties (R² ~0.75), and the SHAP value analysis provides detailed insights into how different drivers impact the prediction of the microphysical cirrus cloud properties. These findings can be used to improve global climate model parameterizations of cirrus cloud formation in future work. Our approach is a good example of exploring unsolved scientific questions using explainable machine learning and feeding insights back to the domain science.
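SHAP values for tree ensembles are computed efficiently by the shap library; for intuition, Shapley values can be computed exactly for a model with few inputs by enumerating feature coalitions, with absent features set to a baseline. The toy "driver" model below is purely illustrative, not the study's:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values by coalition enumeration.  Features absent
    from a coalition S are set to their baseline value.  Exponential in
    the number of features, so for illustration only (libraries such as
    shap use tree-specific shortcuts)."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (model(with_i) - model(without_i))
    return phi

# Toy driver model, linear in temperature anomaly (z[0]) and updraft (z[1]).
model = lambda z: 2.0 * z[0] + 0.5 * z[1]
phi = shapley_values(model, x=[1.0, 4.0], baseline=[0.0, 0.0])
```

For a linear model each Shapley value reduces to weight × (value − baseline), and the values always sum to the gap between the prediction at `x` and at the baseline (the "efficiency" property SHAP plots rely on).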

How to cite: Jeggle, K., Neubauer, D., Camps-Valls, G., Binder, H., Sprenger, M., and Lohmann, U.: Exploring cirrus cloud microphysical properties using explainable machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2391, https://doi.org/10.5194/egusphere-egu22-2391, 2022.

EGU22-2988 | Presentations | ITS2.7/AS5.2

Correcting biases in climate simulations using unsupervised image-to-image-translation networks 

J. Fulton and B. Clarke

Global circulation models (GCMs) form the basis of a vast portion of Earth system research and inform our climate policy. However, our climate system is complex and connected across scales. To simulate it, we must use parameterisations. These parameterisations, which are present in all models, can have a detectable influence on the GCM outputs.

GCMs are improving, but we need to use their current output to optimally estimate the risks of extreme weather. Therefore, we must debias GCM outputs with respect to observations. Current debiasing methods cannot correct both spatial correlations and cross-variable correlations. This limitation means current methods can produce physically implausible weather events - even when the single-location, single-variable distributions match the observations. This limitation is very important for extreme event research. Compound events like heat and drought, which drastically increase wildfire risk, and spatially co-occurring events like multiple bread-basket failures, are not well corrected by these current methods.

We propose using unsupervised image-to-image translations networks to perform bias correction of GCMs. These neural network architectures are used to translate (perform bias correction) between different image domains. For example, they have been used to translate computer-generated city scenes into real-world photos, which requires spatial and cross-variable correlations to be translated. Crucially, these networks learn to translate between image domains without requiring corresponding pairs of images. Such pairs cannot be generated between climate simulations and observations due to the inherent chaos of weather.

In this work, we use these networks to bias correct historical recreation simulations from the HadGEM3-A-N216 atmosphere-only GCM with respect to the ERA5 reanalysis dataset. This GCM has a known bias in simulating the South Asian monsoon, and so we focus on this region. We show the ability of neural networks to correct this bias, and show how combining the neural network with classical techniques produces a better bias correction than either method alone. 

How to cite: Fulton, J. and Clarke, B.: Correcting biases in climate simulations using unsupervised image-to-image-translation networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2988, https://doi.org/10.5194/egusphere-egu22-2988, 2022.

EGU22-3009 | Presentations | ITS2.7/AS5.2

Application of Machine Learning for spatio-temporal mapping of the air temperature in Warsaw 

Amirhossein Hassani, Núria Castell, and Philipp Schneider

Mapping the spatio-temporal distribution of near-surface urban air temperature is crucial to our understanding of climate-sensitive epidemiology, indoor-outdoor thermal comfort, urban biodiversity, and the interactive impacts of climate change and urbanity. Urban-scale decision-making in the face of future climatic uncertainties requires detailed information on near-surface air temperature at high spatio-temporal resolutions. However, such fine resolutions cannot currently be realised by traditional observation networks, or even by regional or global climate models (Hamdi et al. 2020). Given the complexity of the processes affecting air temperature from the urban to the regional scale, here we apply Machine Learning (ML) algorithms, in particular the XGBoost gradient boosting method, to build predictive models of near-surface air temperature (Ta at 2-m height). These predictive models establish data-driven relations between crowd-sourced measured Ta (data produced by citizens' sensors) and a set of spatial and spatio-temporal predictors, primarily derived from Earth Observation satellite data including MODIS Aqua/Landsat 8 Land Surface Temperature (LST), MODIS Terra vegetation indices, and the Sentinel-2 water vapour product. We use our models to predict the sub-daily (at MODIS Aqua overpass times) variation in urban-scale Ta in the city of Warsaw, Poland at a spatial resolution of 1 km for the months July-September of the years 2016 to 2021. A 10-fold cross-validation of the developed models shows a root mean square error between 0.97 and 1.02 °C and a coefficient of determination between 0.96 and 0.98, which are satisfactory according to the literature (Taheri-Shahraiyni and Sodoudi 2017). The resulting maps allow us to identify regions of Warsaw that are vulnerable to heat stress.
The strength of the method used here is that it can easily be replicated in other EU cities to achieve high-resolution maps, owing to the accessibility and open-source nature of the training and predictor data. Contingent on data availability, the predictive framework developed here can also be used for monitoring and downscaling other governing urban climatic parameters, such as relative humidity, in the context of future climate uncertainties.
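The 10-fold cross-validation scores reported above (RMSE and coefficient of determination) can be computed generically; a minimal sketch with a least-squares linear fit standing in for the XGBoost regressor, on synthetic data (illustrative only):

```python
import numpy as np

def kfold_cv_scores(X, y, fit, predict, k=10, seed=0):
    """k-fold cross-validation returning RMSE and R^2 pooled over
    the out-of-fold predictions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    preds = np.empty_like(y, dtype=float)
    for f in folds:
        train = np.setdiff1d(idx, f)          # everything outside the fold
        model = fit(X[train], y[train])
        preds[f] = predict(model, X[f])
    resid = y - preds
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    r2 = float(1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2))
    return rmse, r2

# Least-squares linear fit as a placeholder for the XGBoost regressor.
fit = lambda X, y: np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)[0]
predict = lambda w, X: np.c_[X, np.ones(len(X))] @ w

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)
rmse, r2 = kfold_cv_scores(X, y, fit, predict)
```

Swapping in `xgboost.XGBRegressor` for the `fit`/`predict` pair recovers the evaluation setup described in the abstract.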

Hamdi, R., Kusaka, H., Doan, Q.-V., Cai, P., He, H., Luo, G., Kuang, W., Caluwaerts, S., Duchêne, F., and Van Schaeybroek, B. (2020). "The state-of-the-art of urban climate change modeling and observations." Earth Systems and Environment, 1-16.

Taheri-Shahraiyni, H. and Sodoudi, S. (2017). "High-resolution air temperature mapping in urban areas: A review on different modelling techniques." Thermal Science, 21(6 Part A): 2267-2286.

How to cite: Hassani, A., Castell, N., and Schneider, P.: Application of Machine Learning for spatio-temporal mapping of the air temperature in Warsaw, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3009, https://doi.org/10.5194/egusphere-egu22-3009, 2022.

EGU22-3105 | Presentations | ITS2.7/AS5.2

Classifying weather types in Europe by Self-Organizing-Maps (SOM) with regard to GCM-based future projections 

S. Wehrmann and T. Mölg

The interdisciplinary research project "BayTreeNet" investigates the reactions of forest ecosystems to current climate dynamics. In the mid-latitudes, local climatic phenomena often show a strong dependence on large-scale climate dynamics, the weather types (WT), which significantly determine the climate of a region through their frequency and intensity. In the topographically diverse region of Bavaria, different WT produce various weather conditions at different locations.

The meaning of every WT is explained for the different forest regions in Bavaria and the results of the climate dynamics sub-project provide the physical basis for the "BayTreeNet" project. Subsequently, climate-growth relationships are established in the dendroecology sub-project to investigate the response of forests to individual WT at different forest sites. Complementary steps allow interpretation of results for the past (20th century) and projection into the future (21st century). One hypothesis to be investigated is that forest sites in Bavaria are affected by a significant influence of climate change in the 21st century and the associated change in WT.

The automated classification of large-scale weather patterns is carried out with Self-Organizing Maps (SOM), developed by Kohonen, which enable the visualization and reduction of high-dimensional data. The poster presents the evaluation and selection of an appropriate SOM setting and its first results. In addition, we plan to show first analyses of the environmental conditions of the different WT and how these are represented in global climate models (GCMs) in the past and future.
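The core of Kohonen's SOM algorithm fits in a few lines: each sample pulls its best-matching map node and that node's grid neighbours towards it, with learning rate and neighbourhood radius decaying over time. A minimal sketch on synthetic two-regime data (not the project's configuration):

```python
import numpy as np

def train_som(data, grid=(4, 4), n_iter=500, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Kohonen SOM: find the best-matching unit (BMU) for each
    sample and pull nearby map nodes towards it."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.standard_normal((rows * cols, data.shape[1]))
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)                     # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.1         # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))        # neighbourhood weights
        W += lr * h[:, None] * (x - W)
    return W

def quantization_error(W, data):
    return float(np.mean([np.min(((W - x) ** 2).sum(axis=1)) ** 0.5 for x in data]))

# Two synthetic "weather regimes" in a 2-D field.
rng = np.random.default_rng(3)
data = np.vstack([rng.normal(-2, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])
W = train_som(data)
```

In the weather-typing application, each sample would be a flattened pressure or geopotential field rather than a 2-D point, and each trained node becomes one weather type.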

How to cite: Wehrmann, S. and Mölg, T.: Classifying weather types in Europe by Self-Organizing-Maps (SOM) with regard to GCM-based future projections, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3105, https://doi.org/10.5194/egusphere-egu22-3105, 2022.

EGU22-3482 | Presentations | ITS2.7/AS5.2

Public perception assessment on climate change and natural disaster influence using social media big-data: A case study of USA 

SungKu Heo, Pouya Ifaei, Mohammad Moosazadeh, and ChangKyoo Yoo

Climate change is a global crisis that influences the development of the human race and society. The threats of climate change are increasingly recognized by the public and governments across the environment, society, and the economy, because its consequences appear not only as rising global temperatures but also as intensified natural hazards such as floods, droughts, wildfires, and hurricanes. For sustainable global development, it is crucial to respond to climate change through government policy and decision-making; however, public engagement in actions addressing this critical environmental crisis still needs to be promoted much more widely. Analyzing the relationship between public awareness of climate change and natural disasters is therefore an essential aspect of climate change mitigation and policymaking. In this study, building on the abundance of text messages on social media, especially Twitter, public understanding and discussion of climate change were analyzed by treating humans as sensors that receive information about climate change from their surrounding environment. Twitter content analysis and field-data impact analysis were conducted: text-mining algorithms were applied to the Twitter big data to measure similarity, based on a cosine similarity score (CSS), between the climate change corpus and the natural-events corpora. The factors of natural-disaster influence were then predicted using a multiple linear regression model and the climate change tweets dataset. This research shows that the public is more prone to link natural events with climate change in their tweets when serious natural disasters happen.
The developed regression model indicated that natural events caused by climate change influenced people's social media activity through messages on Twitter, reflecting their awareness of climate change. The results indicate that the public's experience of natural events, including intense disasters, can lead them to link climate change with natural events more easily than people who rarely experience such events.
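A cosine similarity score between two texts reduces to a normalized dot product of their term-frequency vectors; a minimal sketch (the example texts are invented, not from the study's corpora):

```python
from collections import Counter
from math import sqrt

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two texts represented as
    term-frequency vectors (1.0 = identical term distribution)."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

climate = "rising temperatures drought and wildfire linked to climate change"
tweet = "this wildfire and drought must be climate change"
score = cosine_similarity(climate, tweet)
```

Production text-mining pipelines would typically use TF-IDF weighting and stop-word removal before computing the same cosine measure.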

Acknowledgment

This research was supported by the project (NRF-2021R1A2C2007838) through the National Research Foundation of Korea (NRF) and the Korea Ministry of Environment (MOE) as Graduate school specialized in Climate Change.

How to cite: Heo, S., Ifaei, P., Moosazadeh, M., and Yoo, C.: Public perception assessment on climate change and natural disaster influence using social media big-data: A case study of USA, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3482, https://doi.org/10.5194/egusphere-egu22-3482, 2022.

EGU22-4431 | Presentations | ITS2.7/AS5.2

Identification of Global Drivers of Indian Summer Monsoon using Causal Inference and Interpretable AI 

Deepayan Chakraborty, Adway Mitra, Bhupendranath Goswami, and Pv Rajesh

Indian Summer Monsoon Rainfall (ISMR) is a complex phenomenon that depends on several climatic phenomena in different parts of the world through teleconnections. Each season is characterized by extended periods of wet and dry spells (which may cause floods or droughts) that contribute to intra-seasonal variability. Tropical and extra-tropical drivers jointly influence the intra-seasonal variability. Although the El Niño-Southern Oscillation (ENSO) is known to be a driver of ISMR, researchers have also found relations with the Indian Ocean Dipole (IOD), the North Atlantic Oscillation (NAO), and the Atlantic Multi-decadal Oscillation (AMO). In this work, we use ideas from causality theory and explainable machine learning to quantify the influence of different climatic phenomena on the intraseasonal variation of ISMR.

To identify such causal relations, we applied two statistically sound causal inference approaches: the PCMCI+ algorithm (conditional-independence based) and the Granger causality test (regression based). For the Granger causality test, we examined linear and non-linear regression separately. In the case of PCMCI+, conditional independence tests were used between pairs of variables at different "lag periods". It is worth pointing out that, until now, "causality" has not been properly quantified in the climate science community, and only linear correlations have been used as a basis to identify relationships such as ENSO-ISMR and AMO-ISMR. We performed experiments on mean monthly rainfall anomaly data (during the monsoon months of June-September over India) along with six probable drivers (ENSO, AMO, North Atlantic Oscillation, Pacific Decadal Oscillation, Atlantic Niño, and Indian Ocean Dipole) for the months May-September during the period 1861-2016. While the two approaches produced some contradictions, they also led to a common conclusion: ENSO and AMO are equally important and independent drivers of ISMR.
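The regression-based Granger test compares a restricted model (the target series explained by its own lags) against a full model that adds lags of the candidate driver; a significantly better full-model fit indicates Granger causality. A minimal linear sketch on synthetic data (not the study's implementation):

```python
import numpy as np

def granger_f(x, y, lag=2):
    """F-statistic for 'x Granger-causes y': restricted model uses
    only lags of y, full model adds lags of x."""
    n = len(y) - lag
    Y = y[lag:]
    own = np.column_stack([y[lag - k:-k or None] for k in range(1, lag + 1)])
    cross = np.column_stack([x[lag - k:-k or None] for k in range(1, lag + 1)])

    def rss(A):
        A = np.c_[A, np.ones(len(A))]
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        r = Y - A @ beta
        return float(r @ r)

    rss_r, rss_f = rss(own), rss(np.c_[own, cross])
    q = lag                # number of restrictions (the lags of x)
    k = 2 * lag + 1        # parameters in the full model
    return ((rss_r - rss_f) / q) / (rss_f / (n - k))

# Synthetic system where x drives y with a one-step lag.
rng = np.random.default_rng(4)
x = rng.standard_normal(400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
```

Here `granger_f(x, y)` is large while `granger_f(y, x)` stays near its null distribution; PCMCI+ goes further by conditioning on the other candidate drivers, which plain bivariate Granger tests do not.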

Additionally, we have studied the contribution of the drivers to annual extremes of ISMR (years of deficient and excess rainfall) using Shapley values, a game-theoretic concept for quantifying the contributions of different predictors in a model. In this work, we train an XGBoost model to predict the ISMR anomaly from any values of the predictor variables. The experiment is carried out using two approaches. One approach analyzes the contribution of each driver in each of the ISMR months of a year to the mean seasonal rainfall anomaly of that year; the other focuses on the contribution of the seasonal mean value of each driver. In both approaches, we contrast the distribution of each driver's Shapley values for excess and deficient monsoon years. We find that while ENSO is indeed the dominant driving factor for a majority of these years, AMO is another major factor that frequently contributes to such deficiencies, while Atlantic Niño and the Indian Ocean Dipole also contribute at times. On the other hand, the Indian Ocean Dipole appears to be a major contributor in several years of excess rainfall. As future work, we plan to carry out a robustness analysis of these results and to examine the drivers of regional extremes.

How to cite: Chakraborty, D., Mitra, A., Goswami, B., and Rajesh, P.: Identification of Global Drivers of Indian Summer Monsoon using Causal Inference and Interpretable AI, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4431, https://doi.org/10.5194/egusphere-egu22-4431, 2022.

EGU22-4534 | Presentations | ITS2.7/AS5.2

Spatial multi-modality as a way to improve both performance and interpretability of deep learning models to reconstruct phytoplankton time-series in the global ocean 

Joana Roussillon, Jean Littaye, Ronan Fablet, Lucas Drumetz, Thomas Gorgues, and Elodie Martinez

Phytoplankton plays a key role in the carbon cycle and fuels marine food webs. Its seasonal and interannual variations are relatively well-known at global scale thanks to satellite ocean color observations that have been acquired continuously since 1997. However, the satellite-derived chlorophyll-a concentration (Chl-a, a proxy of phytoplankton biomass) time series are still too short to investigate the low-frequency variability of phytoplankton biomass. Machine learning models such as support vector regression (SVR) or multi-layer perceptrons (MLP) have recently proven to be an alternative to mechanistic approaches for reconstructing past Chl-a signals (including periods before the satellite era) from physical predictors, but they remain unsatisfactory. In particular, the relationships between phytoplankton and its physical surrounding environment are not homogeneous in space, and training such models over the entire globe does not allow them to capture these regional specificities. Moreover, although the global ocean is commonly partitioned into biogeochemical provinces within which phytoplankton growth is supposed to be governed by similar processes, their time-evolving nature makes it difficult to impose a priori spatial constraints to restrict the learning phase to specific areas. Here, we propose to overcome this limitation by introducing spatial multi-modalities into a convolutional neural network (CNN). The latter can learn, with no particular supervision, several spatially weighted modes of variability. Each of them is associated with a CNN submodel trained in parallel, standing for a mode-specific response of phytoplankton biomass to the physical forcing. Beyond improving reconstruction performance, we will show that the learned spatial modes appear physically consistent and may help to provide new insights into the physical-biogeochemical processes controlling phytoplankton distribution at global scale.

How to cite: Roussillon, J., Littaye, J., Fablet, R., Drumetz, L., Gorgues, T., and Martinez, E.: Spatial multi-modality as a way to improve both performance and interpretability of deep learning models to reconstruct phytoplankton time-series in the global ocean, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4534, https://doi.org/10.5194/egusphere-egu22-4534, 2022.

EGU22-4584 | Presentations | ITS2.7/AS5.2

Super-Resolution based Deep Downscaling of Precipitation 

Sumanta Chandra Mishra Sharma and Adway Mitra

Downscaling is widely used to improve the spatial resolution of meteorological variables. Broadly, two classes of techniques are used for downscaling: dynamical downscaling and statistical downscaling. Dynamical downscaling depends on the boundary conditions of coarse-resolution global models like General Circulation Models (GCMs), whereas statistical models interpret the statistical relationship between high-resolution and low-resolution data (Kumar et al. 2021). With the rapid development of deep learning techniques in recent years, deep learning based super-resolution (SR) models have been designed in image processing and computer vision for increasing the resolution of a given image. Researchers from other fields have also adapted these techniques and achieved state-of-the-art performance in various domains. To the best of our knowledge, only a few works have used super-resolution methods in the climate domain for deep downscaling of precipitation data.

These super-resolution approaches mostly use convolutional neural networks (CNNs) to accomplish their task. In a CNN, increasing the depth of the model raises the chance of information loss and error propagation (Vandal et al. 2017). To reduce this information loss, we have introduced residual-based deep downscaling models. These models have multiple residual blocks and skip connections between similar types of convolutional layers; the long skip connections help to reduce information loss in the network. The models take as input data that are pre-upsampled by linear interpolation, and then improve the estimates of the pixel values.

In our experiments, we have focused on downscaling of rainfall over the Indian landmass (for Indian summer monsoon rainfall) and over a region in the USA spanning the southeast CONUS and parts of its neighboring states, between longitudes 70° W and 100° W and latitudes 24° N and 40° N. The precipitation data for this task is collected from the India Meteorological Department (IMD), Pune, India, and the NOAA Physical Sciences Laboratory. We have examined our model's predictive behavior and compared it with existing super-resolution models such as SRCNN and DeepSD, which have previously been used for precipitation downscaling. In the DeepSD model, we use the GTOPO30 land elevation data provided by the USGS along with the precipitation data as input. All models are trained and tested in both geographical regions separately, and the proposed model is found to outperform the existing models on multiple accuracy measures, such as PSNR and the correlation coefficient, for the specific region and scaling factor.

How to cite: Mishra Sharma, S. C. and Mitra, A.: Super-Resolution based Deep Downscaling of Precipitation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4584, https://doi.org/10.5194/egusphere-egu22-4584, 2022.

EGU22-4853 | Presentations | ITS2.7/AS5.2

Can cloud properties provide information on surface wind variations using deep learning? 

Sebastiaan Jamaer, Jérôme Neirynck, and Nicole van Lipzig

Recent studies have shown that the increasing sizes of offshore wind farms can reduce energy production through mesoscale interactions with the atmosphere. Accurate nowcasting of the energy yields of large offshore wind farms therefore depends on accurate predictions of the large synoptic weather systems as well as of the smaller mesoscale weather systems. In general, global or regional forecasting models are well suited to predicting synoptic-scale weather systems, whereas satellite or radar data can support the nowcasting of shorter, smaller-scale systems.

In this work, a first step towards nowcasting of the mesoscale wind using satellite images has been taken, namely the coupling of the mesoscale wind component to cloud properties that are available from satellite images using a deep learning framework. To achieve this, a high-resolution regional atmospheric model (COSMO-CLM) was used to generate one year of high-resolution cloud and hub-height wind data. The mesoscale component was filtered out of this wind data and used as the target for the deep learning model, while several cloud-related fields from the atmospheric model served as input images. The model itself was a deep convolutional neural network (a U-Net) trained to minimize the mean squared error.
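Filtering a mesoscale component out of a wind field amounts to removing a smoothed large-scale background; a crude numpy-only sketch (the box filter, filter width and field size are arbitrary choices, not the filter used in the study) could look like:

```python
import numpy as np

def box_smooth(field, k):
    """Crude running-mean smoother used as a scale filter (numpy only)."""
    pad = np.pad(field, k, mode="edge")
    out = np.zeros_like(field, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += pad[k + dy:k + dy + field.shape[0], k + dx:k + dx + field.shape[1]]
    return out / (2 * k + 1) ** 2

hub_wind = np.random.rand(64, 64)     # stand-in for modeled hub-height wind
synoptic = box_smooth(hub_wind, 8)    # large-scale background
mesoscale = hub_wind - synoptic       # high-pass residual: the U-Net target
print(mesoscale.shape)                # (64, 64)
```

The cloud-related model fields would form the input channels, and the U-Net would be trained to map them onto fields like `mesoscale`.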

This analysis indicates that cloud properties contain information about mesoscale weather systems and could be used for nowcasting by taking the trained U-Net as the basis for a temporal deep learning model. However, future validation with real-world data is still needed to determine the added value of such an approach.

How to cite: Jamaer, S., Neirynck, J., and van Lipzig, N.: Can cloud properties provide information on surface wind variations using deep learning?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4853, https://doi.org/10.5194/egusphere-egu22-4853, 2022.

EGU22-5058 | Presentations | ITS2.7/AS5.2

Can satellite images provide supervision for cloud systems characterization? 

Dwaipayan Chatterjee, Hartwig Deneke, and Susanne Crewell

With ever-increasing resolution, geostationary satellites are able to reveal the complex structure and organization of clouds. How cloud systems organize is important for the local climate and strongly connects to the Earth's response to warming through cloud system feedback.

Motivated by recent developments in computer vision for pattern analysis of uncurated images, our work aims to understand the organization of cloud systems based on high-resolution cloud optical depth images. We are exploiting the self-learning capability of a deep neural network to classify satellite images into different subgroups based on the distribution pattern of the cloud systems.

Unlike most studies, our neural network is trained over the central European domain, which is characterized by strong variations in land surface type and topography. The satellite data is post-processed and retrieved at a higher spatio-temporal resolution (2 km, 5 min), enhanced by 66% compared to the current standard and equivalent to the future Meteosat Third Generation satellite, which will be launched soon.

We show how recent advances in deep learning are used to understand clouds' physical properties on temporal and spatial scales. With a purely data-driven approach, we avoid the noise and bias introduced by human labeling, and with properly scalable techniques it takes 0.86 ms and 2.13 ms to label an image at two different spatial configurations. We employ explainable artificial intelligence (XAI), which helps build trust in the neural network's performance.

To generalize the results, a thorough quantitative evaluation is performed on two spatial domains and two pixel configurations (128×128 and 64×64). We examine the uncertainty associated with distinct machine-detected cloud-pattern categories. For this, the learned features of the satellite images are extracted from the trained neural network and fed to an independent hierarchical agglomerative clustering algorithm. The work thus also explores the uncertainties associated with the automatically detected patterns and how they vary with different cloud classification types.
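The final step, clustering learned features with a hierarchical agglomerative algorithm, might look like this with scipy; the 16-dimensional Gaussian features are synthetic stand-ins for the network's latent vectors, and the linkage method and cluster count are arbitrary choices:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# stand-in for encoder features of 100 satellite scenes, two latent "patterns"
features = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(5, 1, (50, 16))])

Z = linkage(features, method="ward")              # agglomerative merge tree
labels = fcluster(Z, t=2, criterion="maxclust")   # cut into 2 pattern groups
print(np.unique(labels))                           # [1 2]
```

In the study the feature vectors would come from the trained network, and the dendrogram itself can be inspected to judge how stable the machine-detected categories are.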

How to cite: Chatterjee, D., Deneke, H., and Crewell, S.: Can satellite images provide supervision for cloud systems characterization?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5058, https://doi.org/10.5194/egusphere-egu22-5058, 2022.

EGU22-5464 | Presentations | ITS2.7/AS5.2

Using interpretable machine learning to identify compound meteorological drivers of crop yield failure 

L. Sweet and J. Zscheischler

Extreme weather events, such as droughts, floods or heatwaves, severely impact agricultural yield. However, crop yield failure may also be caused by the temporal or multivariate compounding of more moderate weather events. An example of such an occurrence is the phenomenon of 'false spring', where the combined effects of a warm interval in late winter followed by a period of freezing temperatures can result in severe damage to vegetation. Alternatively, multiple weather events may impact crops simultaneously, as with compound hot and dry weather conditions.

Machine learning techniques are able to learn highly complex and nonlinear relationships between predictors. Such methods have previously been used to explore the influence of monthly- or seasonally-aggregated weather data as well as predefined extreme event indicators on crop yield. However, as crop yield may be impacted by climatic variables at different temporal scales, interpretable machine learning methods that can extract relevant meteorological features from higher-resolution time series data are desirable.

In this study we test the ability of adapted random forest models to identify compound meteorological drivers of crop failure from simulated data; in particular, we use adaptations capable of ingesting daily multivariate time series data and spatial information. First, we train models to extract useful features from daily climatic data and predict crop yield failure probabilities. Second, we use permutation feature importances and sequential feature selection to investigate the weather events and time periods identified by the models as most relevant for crop yield failure prediction. Finally, we explore the interactions learned by the models between these selected meteorological drivers, and compare the outcomes for several global crop models. Ultimately, our goal is to present a robust and highly interpretable machine learning method that can identify critical weather conditions from datasets with high temporal and spatial resolution, and is therefore able to identify drivers of crop failure using relatively few years of data.
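Permutation feature importance is model-agnostic and easy to sketch from scratch; the three synthetic "weather indices", the hand-fixed model and the MSE score below are illustrative only, not the study's random forest setup:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Increase in MSE when each feature column is shuffled (model-agnostic)."""
    rng = np.random.default_rng(seed)
    base = np.mean((predict(X) - y) ** 2)
    imps = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])                     # break feature j only
            scores.append(np.mean((predict(Xp) - y) ** 2))
        imps[j] = np.mean(scores) - base              # positive = feature matters
    return imps

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                         # e.g. heat/drought/frost indices
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=500)        # failure driven by feature 0
predict = lambda A: 2.0 * A[:, 0]                     # a "fitted" model for illustration
imp = permutation_importance(predict, X, y)
print(imp.argmax())                                    # 0
```

With a fitted random forest in place of the lambda, the same routine ranks daily meteorological features by their contribution to predicted failure probability.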

How to cite: Sweet, L. and Zscheischler, J.: Using interpretable machine learning to identify compound meteorological drivers of crop yield failure, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5464, https://doi.org/10.5194/egusphere-egu22-5464, 2022.

EGU22-5756 | Presentations | ITS2.7/AS5.2

The influence of meteorological parameters on wind speed extreme events:  A causal inference approach 

Katerina Hlavackova-Schindler (Schindlerova), Andreas Fuchs, Claudia Plant, Irene Schicker, and Rosmarie DeWit

Based on the ERA5 data of hourly meteorological parameters [1], we investigate the temporal effects of 12 meteorological parameters on the extreme values occurring in wind speed. We approach the problem using Granger causal inference, namely the heterogeneous graphical Granger model (HGGM) [2]. In contrast to the classical Granger model, which was proposed for causal inference among Gaussian processes, the HGGM detects causal relations among time series with distributions from the exponential family, which includes a wider class of common distributions. In previous synthetic experiments, the HGGM combined with a genetic algorithm search based on the minimum message length principle has been shown to be superior in precision to baseline causal methods [2, 3]. We investigate various experimental settings of all 12 parameters with respect to the wind extremes in various time intervals. Moreover, we compare the influence of various data preprocessing methods and evaluate the interpretability of the discovered causal connections based on meteorological knowledge.

[1] https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-single-levels?tab=overview

[2] Behzadi, S., Hlaváčková-Schindler, K., Plant, C. (2019) Granger causality for heterogeneous processes. In: Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer, pp. 463-475.

[3] Hlaváčková-Schindler, K., Plant, C. (2020) Heterogeneous graphical Granger causality by minimum message length. Entropy, 22, 1400, pp. 1-21.
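As a minimal illustration of the Granger principle underlying the HGGM (here in its classical least-squares form, not the exponential-family version used in the study), one compares the residuals of lagged regressions with and without the candidate driver; all series, lags and coefficients below are synthetic:

```python
import numpy as np

def granger_rss(y, drivers, p):
    """Residual sum of squares of an order-p lagged regression of y."""
    n = len(y)
    rows = []
    for t in range(p, n):
        row = [1.0]
        for s in (y, *drivers):          # own past, then each driver's past
            row += list(s[t - p:t])
        rows.append(row)
    A, b = np.array(rows), y[p:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.sum((b - A @ coef) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=600)                 # candidate meteorological driver
y = np.zeros(600)
for t in range(1, 600):                  # x Granger-causes y at lag 1
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

rss_restricted = granger_rss(y, [], p=2)   # past of y only
rss_full = granger_rss(y, [x], p=2)        # plus past of x
print(rss_full < rss_restricted)           # True: x adds predictive power
```

A formal test would compare the two residual sums via an F-statistic; the HGGM generalizes this regression to exponential-family likelihoods and graphical structure search.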

How to cite: Hlavackova-Schindler (Schindlerova), K., Fuchs, A., Plant, C., Schicker, I., and DeWit, R.: The influence of meteorological parameters on wind speed extreme events:  A causal inference approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5756, https://doi.org/10.5194/egusphere-egu22-5756, 2022.

EGU22-6093 | Presentations | ITS2.7/AS5.2

Machine learning to quantify cloud responses to aerosols from satellite data 

Jessenia Gonzalez, Odran Sourdeval, Gustau Camps-Valls, and Johannes Quaas

The Earth's radiation budget may be altered by changes in atmospheric composition or land use; this is called radiative forcing. Among the human-generated influences on radiative forcing, aerosol-cloud interactions are the least understood. A key uncertainty in this regard, the adjustment of cloud liquid water path (LWP), can be quantified by the ratio (sensitivity) of LWP to changes in cloud droplet number concentration (Nd). A central problem in quantifying this sensitivity from large-scale observations is that these two quantities are not directly retrieved by operational satellite products and are subject to large uncertainties.

In this work, we use machine learning techniques to show that inferring LWP and Nd directly from satellite observations may yield a better understanding of this relationship without using retrievals, which can introduce large and systematic uncertainties. In particular, we use supervised learning on the basis of available high-resolution ICON-LEM (ICOsahedral Non-hydrostatic Large Eddy Model) simulations from the HD(CP)² project (High Definition Clouds and Precipitation for advancing Climate Prediction) and forward-simulated radiances obtained from radiative transfer modeling (RTTOV, Radiative Transfer for TOVS), with MODIS (Moderate Resolution Imaging Spectroradiometer) data as a reference. Usually, only two MODIS reflectance channels are used to estimate LWP and Nd; having access to all 36 bands allows us to exploit the data and find further patterns for obtaining these parameters directly in observation space rather than from retrievals. One machine learning model is used to create an emulator which approximates the radiative transfer model, and another is used to directly predict the sensitivity of LWP to Nd from the satellite observation data.

How to cite: Gonzalez, J., Sourdeval, O., Camps-Valls, G., and Quaas, J.: Machine learning to quantify cloud responses to aerosols from satellite data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6093, https://doi.org/10.5194/egusphere-egu22-6093, 2022.

EGU22-6466 | Presentations | ITS2.7/AS5.2

Global attribution of microclimate dynamics to industrial deforestation sites using thermal remote sensing and machine learning 

N. Tkachenko and L. Garcia Velez

Microclimate is a relatively recent concept in atmospheric sciences, which started drawing the attention of engineers and climatologists after the proliferation of openly available thermal (infrared, middle- and near-infrared) remote sensing instruments and high-resolution emissivity datasets. Although rarely mentioned in the context of reversing global climate change, efficient management of microclimates can nevertheless be considered a possible solution. Their function is bidirectional: on one hand, they can act as 'buffers' by smoothing out the effects of an already altered global climate on people and ecosystems, while on the other they act as structural contributors to perturbations in the higher layers of the atmosphere.

In the most abstract terms, microclimates tend to manifest themselves via land surface temperature conditions, which in turn are highly sensitive to the underlying land cover and land use decisions. Forests are considered the most efficient terrestrial carbon sinks and climate regulators, and various forms, configurations and continuities of logging can substantially alter the patterns of local temperature fluxes, precipitation and ecosystems. In this study we propose a novel heteroskedastic machine learning method which can attribute localized forest loss patches to industrial mining activity and estimate the resulting change in the dynamics of the surrounding microclimate(s).

How to cite: Tkachenko, N. and Garcia Velez, L.: Global attribution of microclimate dynamics to industrial deforestation sites using thermal remote sensing and machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6466, https://doi.org/10.5194/egusphere-egu22-6466, 2022.

EGU22-6543 | Presentations | ITS2.7/AS5.2

High-resolution hybrid spatiotemporal modeling of daily relative humidity across Germany for epidemiological research: a Random Forest approach 

Nikolaos Nikolaou, Laurens Bouwer, Mahyar Valizadeh, Marco Dallavalle, Kathrin Wolf, Massimo Stafoggia, Annette Peters, and Alexandra Schneider

Introduction: Relative humidity (RH) is a meteorological variable of great importance, as it affects other climatic variables and plays a role in plant and animal life as well as in human comfort and well-being. However, the commonly used weather station observations are insufficient to represent the great spatiotemporal variability of RH, leading to exposure misclassification and difficulties in assessing local RH health effects. There is also a lack of high-resolution RH spatial datasets and no readily available methods for modeling humidity across space and time. To tackle these issues, we aimed to improve the spatiotemporal coverage of RH data in Germany using remote sensing and machine learning (ML) modeling.

Methods: In this study, we estimated German-wide daily mean RH at 1 km2 resolution over the period 2000-2020. We used several predictors from multiple sources, including DWD RH observations, air temperature (Ta) predictions, and satellite-derived DEM, NDVI and true color band composition (bands 1, 4 and 3: red, green and blue). Our main predictor for estimating daily mean RH was daily mean Ta, which we had previously mapped at 1 km2 across Germany through a regression-based hybrid approach of two linear mixed models using land surface temperature. An additional important predictor was the date, capturing the day-to-day variation of the relationship between response and explanatory variables. All these variables were included in a Random Forest (RF) model, applied to each year separately. We assessed the model's accuracy via 10-fold cross-validation (CV): first internally, using station observations that were not used for model training, and then externally in the Augsburg metropolitan area using the REKLIM monitoring network over the period 2015-2019.
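The cross-validation scheme reporting out-of-sample R2 and RMSE can be sketched generically; a linear fit stands in for the Random Forest, and the synthetic Ta/RH relationship below is illustrative only:

```python
import numpy as np

def cv_scores(X, y, fit_predict, k=10, seed=0):
    """k-fold CV returning out-of-sample R2 and RMSE."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    pred = np.empty_like(y, dtype=float)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        pred[test] = fit_predict(X[train], y[train], X[test])
    ss_res = np.sum((y - pred) ** 2)
    r2 = 1.0 - ss_res / np.sum((y - y.mean()) ** 2)
    rmse = np.sqrt(ss_res / len(y))
    return r2, rmse

# stand-in for the RF: RH regressed on Ta via least squares
def lin_fit_predict(Xtr, ytr, Xte):
    A = np.c_[np.ones(len(Xtr)), Xtr]
    coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return np.c_[np.ones(len(Xte)), Xte] @ coef

rng = np.random.default_rng(2)
ta = rng.normal(12, 8, size=(400, 1))               # synthetic daily mean Ta
rh = 85 - 0.8 * ta[:, 0] + rng.normal(0, 3, 400)    # synthetic RH (%)
r2, rmse = cv_scores(ta, rh, lin_fit_predict)
print(r2 > 0.7)                                      # True: strong held-out fit
```

Swapping `lin_fit_predict` for an RF trained per year, and holding out whole stations instead of random rows, recovers the internal validation described above.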

Results: In the internal validation, the 21-year overall mean CV-R2 was 0.76 and the CV-RMSE was 6.084%. For the model's external performance, we found CV-R2 = 0.75 and CV-RMSE = 7.051% for same-day values, and CV-R2 = 0.81 and CV-RMSE = 5.420% for 7-day averages. Germany is characterized by high relative humidity, with a 20-year average RH of 78.4%. Although the annual country-wide averages were quite stable, ranging from 81.2% in 2001 to 75.3% in 2020, the spatial variability exceeded 15% annually on average. Generally, winter was the most humid period, with December the most humid month. Extended urban cores (e.g., from Stuttgart to Frankfurt) and individual cities such as Munich were less humid than the surrounding rural areas. There are also specific spatial patterns in the RH distribution associated with mountains, rivers and coastlines; for instance, the Alps and the North Sea coast are areas with elevated RH.

Conclusion: Our results indicate that the applied hybrid RF model is suitable for estimating nationwide RH at high spatiotemporal resolution, achieving strong performance with low errors. Our method contributes to an improved spatial estimation of RH, and the output product will help us better understand the spatiotemporal patterns of RH in Germany. We also plan to apply other ML techniques and compare the findings. Finally, our dataset will be used for epidemiological analyses, but could also serve other research questions.

How to cite: Nikolaou, N., Bouwer, L., Valizadeh, M., Dallavalle, M., Wolf, K., Stafoggia, M., Peters, A., and Schneider, A.: High-resolution hybrid spatiotemporal modeling of daily relative humidity across Germany for epidemiological research: a Random Forest approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6543, https://doi.org/10.5194/egusphere-egu22-6543, 2022.

EGU22-6958 | Presentations | ITS2.7/AS5.2

Causal Discovery in Ensembles of Climate Time Series 

Andreas Gerhardus and Jakob Runge

Understanding the cause and effect relationships that govern natural phenomena is central to scientific inquiry. While controlled experiments are the gold standard for inferring causal relationships, there are many scenarios in which they are not possible, as is the case for most aspects of Earth's complex climate system. Causal relationships then have to be learned from statistical dependencies in observational data, a task commonly referred to as (observational) causal discovery.

When applied to time series data for learning causal relationships in dynamical systems, methods for causal discovery face additional statistical challenges. This is because, as licensed by an assumption of stationarity, samples are taken in a sliding-window fashion and are hence autocorrelated rather than iid. Moreover, strong autocorrelations often occlude other relevant causal links. The recent PCMCI algorithm (Runge et al., 2019) and its variants PCMCI+ (Runge, 2020) and LPCMCI (Gerhardus and Runge, 2020) address and to some extent alleviate these issues.

In this contribution we present the Ensemble-PCMCI method, an adaptation of PCMCI (and its variants PCMCI+ and LPCMCI) to cases in which the data comprises several time series, i.e., measurements of several instances of the same underlying dynamical system. Samples can then be taken across these different time series instead of in a sliding-window fashion, thus avoiding the issue of autocorrelation and also allowing the stationarity assumption to be relaxed. In particular, this opens the possibility of analyzing temporal changes in the underlying causal mechanisms. A potential domain of application are ensemble forecasts.

Related references:
Jakob Runge et al. (2019). Detecting and quantifying causal associations in large nonlinear time series datasets. Science Advances 5 eaau4996.

Jakob Runge (2020). Discovering contemporaneous and lagged causal relations in autocorrelated nonlinear time series datasets. In Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI). Proceedings of Machine Learning Research 124 1388–1397. PMLR.

Andreas Gerhardus and Jakob Runge (2020). High-recall causal discovery for autocorrelated time series with latent confounders. In Advances in Neural Information Processing Systems 33 12615–12625. Curran Associates, Inc.
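The sampling difference that Ensemble-PCMCI exploits can be illustrated on a toy array; the member count, lag, variable count and fixed target time below are arbitrary choices, not parameters of the actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
# M ensemble members, T time steps, V variables of the same dynamical system
M, T, V = 50, 100, 3
ensemble = rng.normal(size=(M, T, V))

tau, t = 2, 60   # lag and fixed target time

# sliding-window sampling (single series): samples overlap in time,
# so they are autocorrelated and stationarity over t must be assumed
single = ensemble[0]
X_window = np.stack([single[s - tau] for s in range(tau, T)])
y_window = single[tau:, 0]

# ensemble sampling: one sample per member at a fixed time -> samples are
# independent realizations, and stationarity over t is no longer required
X_ens = ensemble[:, t - tau, :]
y_ens = ensemble[:, t, 0]
print(X_window.shape, X_ens.shape)   # (98, 3) (50, 3)
```

Because the ensemble samples are indexed by member rather than by time, repeating the analysis for each `t` separately makes temporal changes in the causal mechanisms visible.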

How to cite: Gerhardus, A. and Runge, J.: Causal Discovery in Ensembles of Climate Time Series, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6958, https://doi.org/10.5194/egusphere-egu22-6958, 2022.

EGU22-6998 | Presentations | ITS2.7/AS5.2

Inferring the Cloud Vertical Distribution from Geostationary Satellite Data 

Sarah Brüning, Holger Tost, and Stefan Niebler

Clouds and their radiative feedback mechanisms are of vital importance for the Earth's atmospheric cycle, both for global weather today and for climate change in the future. Climate models and simulations are sensitive to the vertical distribution of clouds, emphasizing the need for broadly accessible, finely resolved data. Although passive satellite sensors provide continuous cloud monitoring on a global scale, they lack the ability to infer physical properties below the cloud top. Active instruments like radar are particularly suitable for this task but lack an adequate spatio-temporal resolution. Here, recent advances in deep learning models open up the possibility of transferring spatial information from a 2D towards a 3D perspective on a large scale.

Using an example period in 2017, this study explores the feasibility and potential of neural networks to reconstruct the vertical distribution of volumetric radar data along a cloud's column. For this purpose, the network has been tested on the full-disk domain of a geostationary satellite with high spatio-temporal resolution data. Using raw satellite channels, spectral indices and topographic data as physical predictors, we infer the 3D radar reflectivity. First results demonstrate the network's capability to reconstruct the cloud vertical distribution, and the ultimate goal of interpolating the cloud column over the whole domain is supported by a considerably high accuracy in predicting the radar reflectivity. The resulting product can open up the opportunity to enhance climate models with an increased spatio-temporal resolution of 3D cloud structures.

How to cite: Brüning, S., Tost, H., and Niebler, S.: Inferring the Cloud Vertical Distribution from Geostationary Satellite Data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6998, https://doi.org/10.5194/egusphere-egu22-6998, 2022.

EGU22-7011 | Presentations | ITS2.7/AS5.2

Unlocking the potential of ML for Earth and Environment researchers 

Tobias Weigel, Frauke Albrecht, Caroline Arnold, Danu Caus, Harsh Grover, and Andrey Vlasenko

This presentation reports on support provided under the aegis of Helmholtz AI for a wide range of machine learning based solutions to research questions in the Earth and environmental sciences. We will give insight into typical problem statements from Earth observation and Earth system modeling that are good candidates for experimentation with ML methods, and report on our accumulated experience in tackling such challenges through individual support projects. We address these projects in an agile, iterative manner, and during the definition phase we direct special attention towards assembling practically meaningful demonstrators within a couple of months. A recent focus of our work lies in tackling software engineering concerns for building ML-ESM hybrids.

Our implementation workflow covers stages from data exploration to model tuning. A project often starts with evaluating the available data and deciding on basic feasibility, apparent limitations such as biases or a lack of labels, and the split into training and test data. Setting up a data processing workflow to subselect and compile training data is often the next step, followed by setting up a model architecture. We have had good experience with automatic tooling to tune hyperparameters and to test and optimize network architectures. In typical implementation projects, these stages repeat many times to improve results and to cover aspects such as errors due to confusing samples, incorporating domain model knowledge, testing alternative architectures and ML approaches, and dealing with memory limitations and performance optimization.

Over the past two years, we have supported Helmholtz-based researchers from many subdisciplines in making the best use of ML methods along these steps. Example projects include wind speed regression on GNSS-R data, emulation of atmospheric chemistry modeling, Earth system model parameterizations with ML, marine litter detection, and rogue wave prediction. The poster presentation will highlight selected best practices across these projects. We are happy to share our experience, as it may prove useful to applications in wider Earth system modeling; if you are interested in discussing your challenge with us, please feel free to chat with us.

How to cite: Weigel, T., Albrecht, F., Arnold, C., Caus, D., Grover, H., and Vlasenko, A.: Unlocking the potential of ML for Earth and Environment researchers, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7011, https://doi.org/10.5194/egusphere-egu22-7011, 2022.

EGU22-7034 | Presentations | ITS2.7/AS5.2

Developing a new emergent constraint through network analysis 

Lucile Ricard, Athanasios Nenes, Jakob Runge, and Fabrizio Falasca

Climate sensitivity expresses how average global temperature responds to an increase in greenhouse gas concentration. It is a key metric for assessing climate change and formulating policy decisions, but its estimation from Earth System Models (ESMs) spans a wide range: between 2.5 and 4.0 K according to the sixth assessment report (AR6) of the Intergovernmental Panel on Climate Change (IPCC). To narrow down this spread, a number of observable metrics, called 'emergent constraints', have been proposed, often based on relatively few parameters from a simulation that are thought to express the 'essence' of the climate simulation and its relationship with climate sensitivity. Many of the constraints to date, however, are model-dependent and therefore questionable in terms of their robustness.

We postulate that methods based on a 'holistic' consideration of the simulations and observations may provide more robust constraints. We focus on Sea Surface Temperature (SST) ensembles, as SST is a major driver of climate variability. To extract the essential patterns of SST variability, we use a knowledge discovery and network inference method, δ-Maps (Fountalis et al., 2016; Falasca et al., 2019), expanded to include a causal discovery algorithm (PCMCI) that relies on conditional independence testing, to capture the essential dynamics of the climate simulation on a functional graph and explore the true causal effects of the underlying dynamical system (Runge et al., 2019). The resulting networks are then quantitatively compared using network 'metrics' that capture different aspects, including the regions of uniform behavior, how they alternate over time, and the strength of association. These metrics are compared between simulations and observations and used as emergent constraints, an approach called Causal Model Evaluation (CME).

We apply δ-Maps and CME to CMIP6 model SST outputs and demonstrate how the networks and related metrics can be used to assess the historical performance of CMIP models and climate sensitivity. We start by comparing the CMIP6 simulations against CMIP5 models, using the reanalysis dataset HadISST (Met Office Hadley Centre) as a proxy for observations: each field is reduced to a network, which is then assessed for its similarity to the reanalysis SST network. The CMIP6 historical networks are then compared against CMIP6 projected networks, built from the Shared Socio-economic Pathway ssp245 ('Middle of the road') scenario. Comparing past and future SST networks helps us evaluate the extent to which climate warming is reflected in changes of the underlying dynamical system. A large distance between the network built over the past period and the network built under a future scenario could be tightly related to a large temperature response to an increase in greenhouse gas emissions, which is precisely how climate sensitivity is defined. We finally give a new estimate of climate sensitivity using a weighting scheme approach derived from a combination of these performance metrics.

How to cite: Ricard, L., Nenes, A., Runge, J., and Falasca, F.: Developing a new emergent constraint through network analysis, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7034, https://doi.org/10.5194/egusphere-egu22-7034, 2022.

EGU22-7355 | Presentations | ITS2.7/AS5.2

Combining cloud properties and synoptic observations to predict cloud base height using Machine Learning 

Julien Lenhardt, Johannes Quaas, and Dino Sejdinovic

Cloud base height (CBH) is an important geometric parameter of a cloud and shapes its radiative properties. The CBH is also of practical interest in the aviation community with regard to pilot visibility and aircraft icing hazards. While the cloud-top height has been successfully derived from passive imaging radiometers on satellites in recent years, deriving the CBH from these same instruments remains a more difficult challenge.

In our study we combine surface observations and passive satellite remote-sensing retrievals to create a database of CBH labels and cloud properties, and ultimately train a machine learning model to predict CBH. The labels come from the global marine meteorological observations dataset (UK Met Office, 2006), which consists of near-global synoptic observations made at sea. This dataset provides information about CBH, cloud type, cloud cover and other meteorological surface quantities, with CBH being the main interest here. The features on which the machine learning model is trained consist of different cloud-top and cloud optical properties (Level 2 products MOD06/MYD06 from the MODIS sensor) extracted on a 127 km × 127 km grid around the synoptic observation point. To handle the large diversity of cloud scenes, an autoencoder architecture is chosen, and the regression task is carried out in the latent space output by the encoder part of the model. To account for spatial relationships in the input data, the model architecture is based on convolutional neural networks. We define a study domain in the Atlantic Ocean around the equator. Combining information from below and above the cloud could allow us to build a robust model to predict CBH and then extend predictions to regions where surface measurements are not available.
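A minimal sketch of regressing in a learned latent space, with PCA standing in for the trained convolutional encoder and synthetic flattened patches in place of the MODIS cloud-property grids (all sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for cloud-property patches flattened to vectors
patches = rng.normal(size=(300, 64))
# synthetic CBH labels, loosely tied to the patch content
cbh = (patches @ rng.normal(size=64)) * 0.05 + rng.normal(0, 0.1, 300)

# "encoder": project into a low-dimensional latent space (PCA here,
# in place of the autoencoder's convolutional encoder)
mu = patches.mean(axis=0)
_, _, vt = np.linalg.svd(patches - mu, full_matrices=False)
latent = (patches - mu) @ vt[:8].T           # 8-dim latent code per scene

# regression head fitted in the latent space
A = np.c_[np.ones(len(latent)), latent]
coef, *_ = np.linalg.lstsq(A, cbh, rcond=None)
pred = A @ coef
print(latent.shape, pred.shape)              # (300, 8) (300,)
```

The study's encoder is learned by reconstruction on the cloud scenes, so its latent space captures cloud morphology far better than PCA would; the regression step, however, has the same shape.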

How to cite: Lenhardt, J., Quaas, J., and Sejdinovic, D.: Combining cloud properties and synoptic observations to predict cloud base height using Machine Learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7355, https://doi.org/10.5194/egusphere-egu22-7355, 2022.

EGU22-8068 | Presentations | ITS2.7/AS5.2

Generative Adversarial Modeling of Tropical Precipitation and the Intertropical Convergence Zone 

Cody Nash, Balasubramanya Nadiga, and Xiaoming Sun

In this study we evaluate the use of generative adversarial networks (GANs) to model satellite-based estimates of precipitation conditioned on reanalysis temperature, humidity, wind, and surface latent heat flux.  We are interested in the climatology of precipitation and modeling it in terms of atmospheric state variables, in contrast to a weather forecast or precipitation nowcast perspective.  We consider a hierarchy of models in terms of complexity, including simple baselines, generalized linear models, gradient boosted decision trees, pointwise GANs and deep convolutional GANs. To gain further insight into the models we apply methods for analyzing machine learning models, including model explainability, ablation studies, and a diverse set of metrics for pointwise and distributional differences, including information theory based metrics.  We find that generative models significantly outperform baseline models on metrics based on the distribution of predictions, particularly in capturing the extremes of the distributions.  Overall, a deep convolutional model achieves the highest accuracy.  We also find that the relative importance of atmospheric variables and of their interactions vary considerably among the different models considered. 
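As an illustration of the distributional metrics such a comparison relies on, a histogram-based estimate of the KL divergence between modeled and observed rain-amount samples might look as follows (a generic sketch on synthetic data, not the authors' exact metric):

```python
import numpy as np

def kl_divergence(p_samples, q_samples, bins=50, eps=1e-12):
    """Histogram estimate of KL(P || Q) between two 1-D samples."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 2.0, size=10_000)    # skewed, rain-like "observations"
good = rng.gamma(2.0, 2.0, size=10_000)   # a model matching the distribution
bad = rng.normal(4.0, 2.0, size=10_000)   # a model missing the heavy tail

kl_good = kl_divergence(good, obs)
kl_bad = kl_divergence(bad, obs)
```

A metric of this kind rewards a generative model for matching the full distribution, including the extremes, rather than only the pointwise mean.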

How to cite: Nash, C., Nadiga, B., and Sun, X.: Generative Adversarial Modeling of Tropical Precipitation and the Intertropical Convergence Zone, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8068, https://doi.org/10.5194/egusphere-egu22-8068, 2022.

EGU22-8130 | Presentations | ITS2.7/AS5.2

A comparison of explainable AI solutions to a climate change prediction task 

Philine Lou Bommer, Marlene Kretschmer, Dilyara Bareeva, Kadircan Aksoy, and Marina Höhne

In climate change research we are dealing with a chaotic system, which usually demands huge computational efforts to make faithful predictions. Deep neural networks (DNNs) offer promising new approaches due to their computational efficiency and universal approximation properties. However, despite the increasing number of successful applications of DNNs, the black-box nature of such purely data-driven approaches limits their trustworthiness and therefore the usability of deep learning in the context of climate science.

The field of explainable artificial intelligence (XAI) has been established to enable a deeper understanding of such complex, highly non-linear methods and their predictions. By shedding light on the reasons behind the predictions made by DNNs, XAI methods can help researchers reveal the underlying physical mechanisms and properties inherent in the studied data. Some XAI methods have already been successfully applied to climate science; however, no detailed comparison of their performance is available. As both the number of XAI methods and the number of DNN applications grow, a comprehensive evaluation is necessary in order to understand the different XAI methods in the climate context.

In this work we provide an overview of the available XAI methods and their potential applications in climate science. Based on a previously published climate change prediction task, we compare several explanation approaches, including model-aware (e.g. Saliency, IntGrad, LRP) and model-agnostic methods (e.g. SHAP). We analyse their ability to verify the physical soundness of the DNN predictions as well as their ability to uncover new insights into the underlying climate phenomena. Another important aspect we address in our work is the possibility of assessing the underlying uncertainties of DNN predictions using XAI methods. This is especially crucial in climate science applications, where uncertainty due to natural variability is usually large. To this end, we investigate the potential of two recently introduced XAI methods, UAI+ and NoiseGrad, which were designed to include uncertainty information of the predictions in the explanations. We demonstrate that these XAI methods enable more stable explanations with respect to model noise and can further deal with uncertainties of network information. We argue that these methods are therefore particularly suitable for climate science applications.
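For readers unfamiliar with the model-aware methods mentioned above, the simplest of them, Saliency, is just the gradient of the network output with respect to its input. A minimal sketch on a toy two-layer network (random weights, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# A tiny fixed two-layer network standing in for a trained DNN
# (weights are random here; only the explanation mechanics matter):
W1 = rng.normal(size=(5, 8))
W2 = rng.normal(size=(8, 1))

def forward(x):
    h = np.tanh(x @ W1)
    return (h @ W2).item(), h

def saliency(x):
    """Gradient ("saliency") explanation: |d output / d input|."""
    _, h = forward(x)
    dh = (1.0 - h ** 2) * W2[:, 0]   # backprop through the tanh layer
    return np.abs(dh @ W1.T)         # chain rule back to the input

x = rng.normal(size=5)
s = saliency(x)                      # one importance score per input feature
```

Methods such as IntGrad, LRP or NoiseGrad refine this basic gradient signal; the attribution-per-input-feature structure of the output is the same.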

How to cite: Bommer, P. L., Kretschmer, M., Bareeva, D., Aksoy, K., and Höhne, M.: A comparison of explainable AI solutions to a climate change prediction task, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8130, https://doi.org/10.5194/egusphere-egu22-8130, 2022.

EGU22-8411 | Presentations | ITS2.7/AS5.2

Reconstructing the Atlantic Meridional Overturning Circulation in Earth System Model simulations from density information using explainable machine learning

Despite the importance of the Atlantic Meridional Overturning Circulation (AMOC) to the climate on decadal and multidecadal timescales, Earth System Models (ESMs) exhibit large differences in their estimates of the amplitude and spectrum of its variability. In addition, observational data are sparse, and before the onset of the current century many reconstructions of the AMOC relied on linear relationships to the more readily observed surface properties of the Atlantic rather than the less explored deeper ocean. Yet it is conceptually well established that the density distribution is dynamically closely related to the AMOC, and in this contribution we investigate this connection in model simulations to identify which density information is necessary to reconstruct the AMOC. We establish these links using a data-driven approach.

We use simulations from a historically forced large ensemble as well as abruptly forced long-term simulations with varying strengths of forcing, together comprising vastly different states of the AMOC. In a first step, we train uncertainty-aware neural networks to infer the state of the AMOC from the density information at different layers in the North Atlantic. In a second step, we compare the performance of the trained neural networks across depths, and with their linear counterparts, in simulations that were not part of the training process. Finally, we investigate how the networks arrive at their predictions using Layer-Wise Relevance Propagation (LRP), a recently developed technique that propagates relevance backwards through the network to the input density field, effectively separating important from unimportant information and identifying regions of high relevance for the reconstruction of the AMOC.

Our preliminary results show that in general, the information provided by only one density layer between the surface and 1100 m is sufficient to reconstruct the AMOC with high precision, and neural networks are capable of generalizing to unseen simulations. From the set of these neural networks trained on different layers, we choose the surface layer as well as one subsurface layer close to 1000 m for further investigation of their decision-making process using LRP. Our preliminary investigation reveals that the LRP in the subsurface layer identifies regions of potentially high physical relevance for the AMOC. By contrast, the regions identified in the surface layer show little physical relevance for the AMOC.

How to cite: Mayer, B., Barnes, E., Marotzke, J., and Baehr, J.: Reconstructing the Atlantic Meridional Overturning Circulation in Earth System Model simulations from density information using explainable machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8411, https://doi.org/10.5194/egusphere-egu22-8411, 2022.

EGU22-8454 | Presentations | ITS2.7/AS5.2

Using Generative Adversarial Networks (GANs) to downscale tropical cyclone precipitation. 

Emily Vosper, Dann Mitchell, Peter Watson, Laurence Aitchison, and Raul Santos-Rodriguez

Fluvial flood hazards from tropical cyclones (TCs) are frequently the leading cause of mortality and damages (Rezapour and Baldock, 2014). Accurately modeling TC precipitation is vital for studying the current and future impacts of TCs. However, general circulation models at typical resolutions struggle to accurately reproduce TC rainfall, especially for the most extreme storms (Murakami et al., 2015). Increasing horizontal resolution can improve precipitation estimates (Roberts et al., 2020; Zhang et al., 2021), but as these methods are computationally expensive there is a trade-off between accuracy and generating enough ensemble members to sample sufficient high-impact, low-probability events. Often, downscaling models are used as a computationally cheaper alternative.

Here, we downscale TC precipitation data from 100 km to 10 km resolution using a generative adversarial network (GAN). Generative approaches have the potential to faithfully reproduce the fine spatial detail and stochastic nature of precipitation (Ravuri et al., 2021). Using observational products for tracking (IBTrACS) and rainfall (MSWEP), we train our GAN over the historical period 1979-2020. We are interested in how well our model reproduces precipitation intensity and structure, with a focus on the most extreme events, where models have traditionally struggled.

Bibliography 

Murakami, H., et al., 2015. Simulation and Prediction of Category 4 and 5 Hurricanes in the High-Resolution GFDL HiFLOR Coupled Climate Model*. Journal of Climate, 28(23), pp.9058-9079. 

Ravuri, S., et al., 2021. Skilful precipitation nowcasting using deep generative models of radar. Nature, 597(7878), pp.672-677. 

Rezapour, M. and Baldock, T., 2014. Classification of Hurricane Hazards: The Importance of Rainfall. Weather and Forecasting, 29(6), pp.1319-1331. 

Roberts, M., et al., 2020. Impact of Model Resolution on Tropical Cyclone Simulation Using the HighResMIP–PRIMAVERA Multimodel Ensemble. Journal of Climate, 33(7), pp.2557-2583. 

Zhang, W., et al., 2021. Tropical cyclone precipitation in the HighResMIP atmosphere-only experiments of the PRIMAVERA Project. Climate Dynamics, 57(1-2), pp.253-273. 

How to cite: Vosper, E., Mitchell, D., Watson, P., Aitchison, L., and Santos-Rodriguez, R.: Using Generative Adversarial Networks (GANs) to downscale tropical cyclone precipitation., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8454, https://doi.org/10.5194/egusphere-egu22-8454, 2022.

EGU22-8499 | Presentations | ITS2.7/AS5.2 | Highlight

Matryoshka Neural Operators: Learning Fast PDE Solvers for Multiscale Physics 

Björn Lütjens, Catherine H. Crawford, Campbell Watson, Chris Hill, and Dava Newman

Running a high-resolution global climate model can take multiple days on the world's largest supercomputers. Due to the long runtimes that are caused by solving the underlying partial differential equations (PDEs), climate researchers struggle to generate ensemble runs that are necessary for uncertainty quantification or exploring climate policy decisions.


Physics-informed neural networks (PINNs) promise a solution: they can solve single instances of PDEs up to three orders of magnitude faster than traditional finite-difference numerical solvers. However, most approaches in physics-informed machine learning learn the solution of PDEs over the full spatio-temporal domain, which requires infeasible amounts of training data, does not exploit knowledge of the underlying large-scale physics, and reduces model trust. Our philosophy is to limit learning to the hard-to-model parts. Hence, we propose a novel method, the matryoshka neural operator, that leverages an established scheme called super-parametrization developed in geophysical fluid dynamics. Using this scheme, our physics-informed architecture exploits knowledge of the approximate large-scale dynamics and only learns the influence of the small-scale dynamics on the large-scale dynamics, also called subgrid parametrizations.


Some work in geophysical fluid dynamics is conceptually similar, but relies fully on neural networks, which can only operate on fixed grids (Gentine et al., 2018). We are the first to learn grid-independent subgrid parametrizations by leveraging neural operators that learn the dynamics in a grid-independent latent space. Neural operators can be seen as an extension of neural networks to infinite dimensions: they encode infinite-dimensional inputs into finite-dimensional representations, such as eigen- or Fourier modes, and learn the nonlinear temporal dynamics in the encoded state.


We demonstrate the neural operators for learning non-local subgrid parametrizations over the full large-scale domain of the two-scale Lorenz96 equation. We show that the proposed learning-based PDE solver is grid-independent, has quasilinear instead of quadratic complexity in comparison to a fully-resolving numerical solver, is more accurate than current neural network or polynomial-based parametrizations, and offers interpretability through Fourier modes.
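The grid independence claimed here rests on encoding fields as truncated Fourier modes rather than grid values. A minimal sketch of that idea (the encoding step only, without the learned dynamics):

```python
import numpy as np

def encode(u, k):
    """Truncate a 1-D periodic field to its k lowest Fourier modes.

    The resulting coefficients are a grid-independent code for the field."""
    return np.fft.rfft(u)[:k] / len(u)

def decode(c, n):
    """Evaluate the truncated representation on any grid of size n."""
    full = np.zeros(n // 2 + 1, dtype=complex)
    full[:len(c)] = c * n
    return np.fft.irfft(full, n)

# A band-limited field sampled on a coarse 64-point grid:
x64 = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u64 = np.sin(x64) + 0.5 * np.cos(3 * x64)
code = encode(u64, k=8)

# The same 8-number code decodes on a finer 128-point grid -- the sense
# in which the operator acts on functions rather than on a fixed grid:
x128 = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u128 = decode(code, 128)
```

In a neural operator, the learned dynamics act on `code` itself, so the trained model can be queried at resolutions it was never trained on.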


Gentine, P., Pritchard, M., Rasp, S., Reinaudi, G., and Yacalis, G. (2018). Could machine learning break the convection parameterization deadlock? Geophysical Research Letters, 45, 5742– 5751. https://doi.org/10.1029/2018GL078202

How to cite: Lütjens, B., Crawford, C. H., Watson, C., Hill, C., and Newman, D.: Matryoshka Neural Operators: Learning Fast PDE Solvers for Multiscale Physics, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8499, https://doi.org/10.5194/egusphere-egu22-8499, 2022.

EGU22-8649 | Presentations | ITS2.7/AS5.2

Physically Based Deep Learning Framework to Model Intense Precipitation Events at Engineering Scales 

Bernardo Teufel, Fernanda Carmo, Laxmi Sushama, Lijun Sun, Naveed Khaliq, Stephane Belair, Asaad Yahia Shamseldin, Dasika Nagesh Kumar, and Jai Vaze

The high computational cost of super-resolution (< 250 m) climate simulations is a major barrier to generating climate change information at the high spatial and temporal resolutions required by many sectors for planning local and asset-specific climate change adaptation strategies. This study couples machine learning and physical modelling paradigms to develop a computationally efficient simulator-emulator framework for generating super-resolution climate information. To this end, a regional climate model (RCM) is applied over the city of Montreal, for the summers of 2015 to 2020, at 2.5 km (i.e., low resolution – LR) and 250 m (i.e., high resolution – HR); these simulations are used to train and validate the proposed super-resolution deep learning (DL) model. In the field of video super-resolution, convolutional neural networks combined with motion compensation have been used to merge information from multiple LR frames to generate high-quality HR images. In this study, a recurrent DL approach based on passing the generated HR estimate forward through time helps the DL model recreate fine details and produce temporally consistent fields, resembling the data assimilation process commonly used in numerical weather prediction. Time-invariant HR surface fields and storm motion (approximated by RCM-simulated wind) are also provided to the DL model, which further improves output realism. Results suggest that the DL model is able to generate HR precipitation estimates with significantly lower errors than the other methods considered, especially for the intense short-duration precipitation events that often occur during the warm season and are needed to evaluate the climate resiliency of urban storm drainage systems. The generic and flexible nature of the developed framework makes it even more promising, as it can be applied to other climate variables, periods and regions.

How to cite: Teufel, B., Carmo, F., Sushama, L., Sun, L., Khaliq, N., Belair, S., Shamseldin, A. Y., Nagesh Kumar, D., and Vaze, J.: Physically Based Deep Learning Framework to Model Intense Precipitation Events at Engineering Scales, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8649, https://doi.org/10.5194/egusphere-egu22-8649, 2022.

EGU22-8656 | Presentations | ITS2.7/AS5.2 | Highlight

Conditional normalizing flow for predicting the occurrence of rare extreme events on long time scales 

Jakob Kruse, Beatrice Ellerhoff, Ullrich Köthe, and Kira Rehfeld

The socio-economic impacts of rare extreme events, such as droughts, are one of the main ways in which climate affects humanity. A key challenge is to quantify the changing risk of once-in-a-decade or even once-in-a-century events under global warming, while leaning heavily on comparatively short observation spans. The predictive power of classical statistical methods from extreme value theory (EVT) often remains limited to uncorrelated events with short return periods. This is mainly due to their strong assumption of an underlying exponential family distribution of the variable in question. Standard EVT is therefore at odds with the rich and large-scale correlations found in various surface climate parameters such as local temperatures, as well as the more complex shape of empirical distributions. Here, we turn to recent developments in machine learning, namely to conditional normalizing flows, which are flexible neural networks for modeling highly-correlated unknown distributions. Given a short time series, we show how such networks can model the posterior probability of events whose return periods are much longer than the observation span. The necessary correlations and patterns can be extracted from a paired set of inputs, i.e. time series, and outputs, i.e. return periods. To evaluate this approach in a controlled setting, we generate synthetic training data by sampling temporally autoregressive processes with a non-trivial covariance structure. We compare the results to a baseline analysis using EVT. In this work, we focus on the prediction of return periods of rare statistical events. However, we expect the same potential for a wide range of statistical measures, such as the power spectrum and rate functions. Future work should also investigate its applicability to compound and spatially extended events, as well as changing conditions under warming scenarios.
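A minimal version of the synthetic setup described above, generating a temporally autoregressive series and estimating an empirical return period from it (illustrative only; the actual study uses processes with a richer covariance structure, and the normalizing flow replaces this naive empirical estimate):

```python
import numpy as np

def ar1(n, phi, sigma=1.0, rng=None):
    """Temporally autoregressive process x_t = phi * x_{t-1} + noise."""
    rng = rng or np.random.default_rng(0)
    noise = sigma * rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + noise[t]
    return x

def empirical_return_period(x, threshold):
    """Mean spacing between threshold exceedances -- the naive estimate
    that only works when the record is much longer than the return period."""
    exceed = np.flatnonzero(x > threshold)
    if len(exceed) < 2:
        return float("inf")
    return float(np.mean(np.diff(exceed)))

# A long synthetic record; a flow would be trained on much shorter slices
# paired with return periods derived from records like this one:
series = ar1(100_000, phi=0.8)
rp = empirical_return_period(series, threshold=3.0)
```

The learning task is then to predict quantities like `rp` from short slices of `series`, exploiting the correlation structure that standard EVT discards.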

How to cite: Kruse, J., Ellerhoff, B., Köthe, U., and Rehfeld, K.: Conditional normalizing flow for predicting the occurrence of rare extreme events on long time scales, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8656, https://doi.org/10.5194/egusphere-egu22-8656, 2022.

EGU22-8848 | Presentations | ITS2.7/AS5.2 | Highlight

Defining regime specific cloud sensitivities using the learnings from machine learning 

Alyson Douglas and Philip Stier

Clouds remain a core uncertainty in quantifying Earth’s climate sensitivity due to their complex dynamical and microphysical interactions with multiple components of the Earth system. It is therefore pivotal to observationally constrain possible cloud changes in a changing climate in order to evaluate our current generation of Earth system models against a set of physically realistic sensitivities. We developed a novel observational regime framework from over 15 years of MODIS satellite observations, from which we have derived a set of regimes of cloud controlling factors. These regimes were established using the relationship strength, as measured by the weights of a trained, simple machine learning model. We apply these as observational constraints on the r1i1p1f1 and r1i1p1f3 historical runs from various CMIP6 models to test whether CMIP6 climate models can accurately represent key cloud controlling factors. Within our regime framework, we can compare the observed environmental drivers and sensitivities of each regime against the parameterization-driven, modeled outcomes. We find that, for almost every regime, CMIP6 models do not properly represent the global distribution of occurrence, calling into question how much we can trust our range of climate sensitivities when specific cloud controlling factors are so poorly represented by these models. This is especially pertinent for Southern Ocean and marine stratocumulus regimes, as changes in these clouds’ optical depths and cloud amount have increased the ECS from CMIP5 to CMIP6. Our results suggest that these uncertainties in CMIP6 cloud parameterizations propagate into derived cloud feedbacks and ultimately climate sensitivity, which is evident from a regime-based analysis of cloud controlling factors.

How to cite: Douglas, A. and Stier, P.: Defining regime specific cloud sensitivities using the learnings from machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8848, https://doi.org/10.5194/egusphere-egu22-8848, 2022.

EGU22-9112 | Presentations | ITS2.7/AS5.2

Causal Orthogonal Functions: A Causal Inference approach to temporal feature extraction 

Nicolas-Domenic Reiter, Jakob Runge, and Andreas Gerhardus

Understanding complex dynamical systems is a major challenge in many scientific disciplines. There are two aspects which are of particular interest when analyzing complex dynamical systems: 1) the temporal patterns along which they evolve and 2) the governing causal mechanisms.

Temporal patterns in a time series can be extracted and analyzed through a variety of time-series representations, that is, collections of filters. Discrete wavelet and Fourier transforms are prominent examples and have been widely applied to investigate the temporal structure of dynamical systems.

Causal Inference is a framework formalizing questions of cause and effect. In this work we propose an elementary and systematic approach to combining time-series representations with Causal Inference. Hereby we introduce a notion of cause and effect with respect to a pair of arbitrary time-series filters. Using a singular value decomposition, we derive an alternative representation of how one process drives another over a specified time period. We call the building blocks of this representation Causal Orthogonal Functions. Combining the notion of Causal Orthogonal Functions with a wavelet or Fourier decomposition of a time series yields time-scale-specific Causal Orthogonal Functions. As a result we obtain a time-scale-specific representation of the causal influence one process has on another over some fixed time period. This makes it possible to conduct causal effect analyses in discrete-time stochastic dynamical systems at multiple time scales. We illustrate our approach by examining linear VAR processes.
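The flavour of this construction can be illustrated on a toy VAR(1) process. The sketch below fits a linear map from a lagged window of the driving process to a future window of the driven process and takes its SVD; the singular vector pairs play the role of paired orthogonal input/output functions (a simplified stand-in for intuition, not the authors' exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1) in which x drives y (illustrative setup):
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

# Stack lagged windows: how does a past window of x map to a future window of y?
L = 4
Xw = np.array([x[t - L:t] for t in range(L, n - L)])
Yw = np.array([y[t:t + L] for t in range(L, n - L)])

# Least-squares linear map from past-x windows to future-y windows,
# then an SVD of that map; each (input, output) singular vector pair is an
# orthogonal mode along which x influences y over the chosen time period:
A, *_ = np.linalg.lstsq(Xw, Yw, rcond=None)
U, s, Vt = np.linalg.svd(A)
```

Applying a wavelet or Fourier filter bank to `x` and `y` before windowing would yield the time-scale-specific version described in the abstract.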

How to cite: Reiter, N.-D., Runge, J., and Gerhardus, A.: Causal Orthogonal Functions: A Causal Inference approach to temporal feature extraction, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9112, https://doi.org/10.5194/egusphere-egu22-9112, 2022.

EGU22-9250 | Presentations | ITS2.7/AS5.2

Prominent discords in climate data through matrix profile techniques: detecting emerging long term pattern changes and anomalous events

Outlier detection generally aims at identifying extreme events and insightful changes in climate behavior. One important type of outlier is the pattern outlier, also called a discord, where the detected outlier pattern covers a time interval instead of a single point in the time series. Machine learning contributes many algorithms and methods in this field, especially unsupervised algorithms for different types of time series data. In a first submitted paper, we investigated discord detection applied to climate-related impact observations. We introduced the notion of a prominent discord, a contextual concept that derives a set of insightful discords by identifying dependencies among variable-length discords, ordered by the number of discords they subsume.

Following this study, here we propose a ranking function based on the length of the first subsumed discord and the total length of the prominent discord, making use of the powerful matrix profile technique. Preliminary results show that our approach, applied to monthly runoff time series between 1902 and 2005 over West Africa, detects both the emergence of long-term change with the associated former climate regime, and the regional driest decade (1982-1992) of the 20th century (i.e. a climate extreme event). In order to demonstrate the genericity and multiple insights gained by our method, we go further by evaluating the approach on other impact (e.g. crop data, fires, water storage) and climate (precipitation and temperature) observations, to provide similar results on different variables, extract relationships among them and identify what constitutes a prominent discord in such cases. A further step will consist in evaluating our methodology on historical climate and impact simulations, to determine whether the prominent discords highlighted in observations can be captured by climate and impact models.
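The matrix profile underlying this approach can be computed naively in a few lines: for each subsequence, record the distance to its nearest non-trivial neighbour; discords are the subsequences whose nearest neighbours are farthest away. A brute-force sketch on synthetic data (real implementations use fast algorithms such as STOMP; the exclusion-zone width here is one illustrative choice):

```python
import numpy as np

def matrix_profile(ts, m):
    """Naive matrix profile: for each length-m subsequence, the z-normalized
    distance to its nearest non-overlapping neighbour. The subsequence with
    the LARGEST profile value is the top discord."""
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)])
    subs = (subs - subs.mean(axis=1, keepdims=True)) / \
           (subs.std(axis=1, keepdims=True) + 1e-12)
    profile = np.full(n, np.inf)
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        d[max(0, i - m + 1):i + m] = np.inf   # exclude trivial self-matches
        profile[i] = d.min()
    return profile

rng = np.random.default_rng(0)
ts = np.sin(np.linspace(0, 20 * np.pi, 1000))   # a regular "climate" cycle
ts[500:520] += 2.0                              # an anomalous event (discord)
mp = matrix_profile(ts, m=25)
discord = int(np.argmax(mp))                    # start index of the top discord
```

Prominent discords extend this idea by grouping and ranking discords of varying lengths rather than reporting only the single largest profile value.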

How to cite: El Khansa, H., Gervet, C., and Brouillet, A.: Prominent discords in climate data through matrix profile techniques: detecting emerging long term pattern changes and anomalous events , EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9250, https://doi.org/10.5194/egusphere-egu22-9250, 2022.

EGU22-9281 | Presentations | ITS2.7/AS5.2

Machine learning-based identification and classification of ocean eddies 

Eike Bolmer, Adili Abulaitijiang, Jürgen Kusche, Luciana Fenoglio-Marc, Sophie Stolzenberger, and Ribana Roscher

The automatic detection and tracking of mesoscale ocean eddies, the ‘weather of the ocean’, is a well-known task in oceanography. These eddies have horizontal scales from 10 km up to 100 km and above. They transport water mass, heat, nutrition, and carbon and have been identified as hot spots of biological activity. Monitoring eddies is therefore of interest among others to marine biologists and fishery. 
Recent advances in satellite-based observations for oceanography, such as sea surface height (SSH) and sea surface temperature (SST), have resulted in a large supply of different data products in which eddies are visible. In radar altimetry, observations are acquired with repeat cycles between 10 and 35 days and cross-track spacings of a few tens to a few hundreds of kilometres. Ocean eddies are therefore clearly visible but typically covered by only one ground track. In addition, due to their motion, eddies are difficult to reconstruct, which makes creating detailed maps of the ocean with high temporal resolution a challenge. In general, eddies are considered a perturbation, and their influence on altimetry data is difficult to determine, which is especially limiting for the determination of an accurate time-averaged dynamic topography of the ocean.
Due to their dynamic spatio-temporal behavior, identifying and tracking eddies is challenging. A number of methods have been developed to identify and track eddies in gridded maps of sea surface height derived from multi-mission data sets. However, these procedures have shortcomings, since the gridding process removes information that would be valuable for achieving more accurate results.
Therefore, in the project EDDY carried out at the University of Bonn, we intend to use ground-track data from satellite altimetry and, as a long-term goal, additional remote sensing data such as SST and optical imagery, as well as statistical information from model outputs. The combination of these data will serve as the basis for a multi-modal deep learning algorithm. In detail, we will utilize transformers, a deep neural network architecture that originates from the field of Natural Language Processing (NLP) and has become popular in recent years in the field of computer vision. This method shows promising results in terms of understanding temporal and spatial information, which is essential for detecting and tracking highly dynamic eddies.
In this presentation, we introduce the deep neural network used in the EDDY project and show results based on gridded data sets for the Gulf Stream area for 2017, as well as first results of single-track eddy identification in the region.

How to cite: Bolmer, E., Abulaitijiang, A., Kusche, J., Fenoglio-Marc, L., Stolzenberger, S., and Roscher, R.: Machine learning-based identification and classification of ocean eddies, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9281, https://doi.org/10.5194/egusphere-egu22-9281, 2022.

EGU22-9461 | Presentations | ITS2.7/AS5.2

Data Driven Approaches for Climate Predictability 

Balasubramanya Nadiga

Reduced-order dynamical models play a central role in developing our understanding of the predictability of climate. In this context, the Linear Inverse Modeling (LIM) approach (closely related to Dynamic Mode Decomposition, DMD), by capturing a few essential interactions between dynamical components of the full system, has proven valuable in giving insights into the dynamical behavior of the full system. While nonlinear extensions of the LIM approach have been attempted, none have gained widespread acceptance. We demonstrate that Reservoir Computing (RC), a form of machine learning suited to learning chaotic dynamics by exploiting the phenomenon of generalized synchronization, provides an alternative nonlinear approach that comprehensively outperforms the LIM approach. Additionally, we highlight the potential of the RC approach to capture the structure of the climatological attractor and to continue the evolution of the system on the attractor in a realistic fashion long after the ensemble average has stopped tracking the reference trajectory. Finally, other dynamical-systems-based methods and probabilistic deep learning methods are considered, and a broader perspective on the use of data-driven methods in understanding climate predictability is offered.
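The essence of Reservoir Computing, a fixed random recurrent network in which only a linear readout is trained, can be sketched as follows (a minimal echo state network doing one-step-ahead prediction on a toy sine series, not the configuration used in the study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed random reservoir: input weights and recurrent weights are never
# trained; the recurrent matrix is scaled to spectral radius rho < 1.
n_res, rho = 200, 0.9
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the reservoir with input series u; return the state history."""
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in * ut)
        states[t] = x
    return states

# One-step-ahead prediction of a sine (a stand-in for a chaotic series):
t = np.arange(2000)
u = np.sin(0.1 * t)
S = run_reservoir(u[:-1])
S, target = S[100:], u[1:][100:]     # discard an initial washout transient

# Only the linear readout is trained, here by ridge regression:
lam = 1e-6
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ target)
pred = S @ W_out
rmse = float(np.sqrt(np.mean((pred - target) ** 2)))
```

Because training reduces to a single linear solve, an RC model is cheap to fit, which is part of its appeal relative to fully trained recurrent networks.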

How to cite: Nadiga, B.: Data Driven Approaches for Climate Predictability, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9461, https://doi.org/10.5194/egusphere-egu22-9461, 2022.

EGU22-9877 | Presentations | ITS2.7/AS5.2

A Conditional Generative Adversarial Network for Rainfall Downscaling 

Marcello Iotti, Paolo Davini, Jost von Hardenberg, and Giuseppe Zappa

Predicting extreme precipitation events is one of the main challenges of climate science in this decade. Despite continuously increasing computing power, the spatial resolution of Global Climate Models (GCMs) is still too coarse to correctly represent and predict small-scale phenomena such as convection, so precipitation prediction remains imprecise. Indeed, precipitation shows variability on spatial and temporal scales (much) smaller than the resolution of current state-of-the-art GCMs. Therefore, downscaling techniques play a crucial role, both for understanding the phenomenon itself and for applications such as hydrologic studies, risk prediction and emergency management. Seen in the context of image processing, a downscaling procedure has many similarities with super-resolution tasks, i.e. improving the resolution of an image. This task has benefited from the application of machine learning techniques, and in particular from the introduction of Convolutional Neural Networks (CNNs).

In our work we exploit a conditional Generative Adversarial Network (cGAN) to train a generator model to perform precipitation downscaling. This generator, a deep CNN, takes as input the precipitation field at the scale resolved by GCMs, adds random noise, and outputs a possible realization of the precipitation field at higher resolution, preserving its statistical properties with respect to the coarse-scale field. The GAN is being trained and tested in a “perfect model” setup, in which we try to reproduce the ERA5 precipitation field starting from an upscaled version of it.

Compared to other downscaling techniques, our model has the advantage of being computationally inexpensive at run time, since the computational load is mostly concentrated in the training phase. We are examining the Greater Alpine Region, where the performance of numerical models is limited by the complex orography. Nevertheless, the approach, being independent of physical, statistical and empirical assumptions, can easily be extended to other domains.

How to cite: Iotti, M., Davini, P., von Hardenberg, J., and Zappa, G.: A Conditional Generative Adversarial Network for Rainfall Downscaling, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9877, https://doi.org/10.5194/egusphere-egu22-9877, 2022.

EGU22-10120 | Presentations | ITS2.7/AS5.2

A Convolutional Neural Network approach for downscaling climate model data in Trentino-South Tyrol (Eastern Italian Alps) 

Alice Crespi, Daniel Frisinghelli, Tatiana Klisho, Marcello Petitta, Alexander Jacob, and Massimiliano Pittore

Statistical downscaling is a very popular technique to increase the spatial resolution of existing global and regional climate model simulations and to provide reliable climate data at local scale. The availability of tailored information is particularly crucial for conducting local climate assessments, climate change studies and for running impact models, especially in complex terrain. A crucial requirement is the ability to reliably downscale the mean, variability and extremes of climate data, while preserving their spatial and temporal correlations.

Several machine learning-based approaches have been proposed to perform this task by extracting non-linear relationships between local-scale variables and large-scale atmospheric predictors, and they can outperform more traditional statistical methods. In recent years, deep learning has gained particular interest in geoscientific studies and climate science as a promising tool for climate downscaling, thanks to its ability to extract high-level features from large datasets using complex hierarchical architectures. However, the proper network architecture depends strongly on the target variable, the temporal and spatial resolution, the application purposes and the target domain.

This contribution presents a Deep Convolutional Encoder-Decoder Network (DCEDN) architecture which was implemented and evaluated for the first time over Trentino-South Tyrol in the Eastern Italian Alps to derive 1-km climate fields of daily temperature and precipitation from ERA-5 reanalysis. We will show that in-depth optimization of hyper-parameters, loss function choice and sensitivity analyses are essential preliminary steps to derive an effective architecture and enhance the interpretability of results and of their variability. The validation of downscaled fields of both temperature and precipitation confirmed the improved representation of local features for both mean and extreme values, even though lower performances were obtained for precipitation in reproducing small-scale spatial features. In all cases, DCEDN was found to outperform classical schemes based on linear regression and the bias adjustment procedures used as benchmarks. We will discuss in detail the advantages and recommendations for the integration of DCEDN as an efficient post-processing block in climate data simulations supporting local-scale studies. The model constraints in feature extraction, especially for precipitation, over the limited extent of the study domain will also be explained along with potential future developments of such type of networks for improved climate science applications.

How to cite: Crespi, A., Frisinghelli, D., Klisho, T., Petitta, M., Jacob, A., and Pittore, M.: A Convolutional Neural Network approach for downscaling climate model data in Trentino-South Tyrol (Eastern Italian Alps), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10120, https://doi.org/10.5194/egusphere-egu22-10120, 2022.

EGU22-10773 | Presentations | ITS2.7/AS5.2 | Highlight

Choose your own weather adventure: deep weather generation for “what-if” climate scenarios 

Campbell Watson, Jorge Guevara, Daniela Szwarcman, Dario Oliveira, Leonardo Tizzei, Maria Garcia, Priscilla Avegliano, and Bianca Zadrozny

Climate change is making extreme weather more extreme. Given the inherent uncertainty of long-term climate projections, there is a growing need for rapid, plausible “what-if” climate scenarios to help users understand climate exposure and examine resilience and mitigation strategies. Since the 1980s, such scenarios have been created using stochastic weather generators. However, it is very challenging for traditional weather generation algorithms to create realistic extreme climate scenarios, because the weather data being modeled are highly imbalanced, contain spatiotemporal dependencies, and include extreme events exacerbated by a changing climate.

There are few works comparing and evaluating stochastic multisite (i.e., gridded) weather generators, and no existing work that compares promising deep learning approaches for weather generation with classical stochastic weather generators. We will present the culmination of a multi-year effort to perform a systematic evaluation of stochastic weather generators and deep generative models for multisite precipitation synthesis. Among other things, we show that variational auto-encoders (VAE) offer an encouraging pathway for efficient and controllable climate scenario synthesis – especially for extreme events. Our proposed VAE schema selects events with different characteristics in the normalized latent space (from rare to common) and generates high-quality scenarios using the trained decoder. Improvements are provided via latent space clustering and bringing histogram-awareness to the VAE loss.
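A histogram-aware loss term of the kind mentioned above could, for instance, penalize the distance between the rainfall distributions of real and generated samples. The sketch below is a hedged illustration (the abstract does not specify the exact formulation used in the VAE loss):

```python
import numpy as np

def histogram_penalty(real, generated, bins=20):
    """L1 distance between normalized rainfall histograms: an extra term that
    can be added to a VAE reconstruction + KL loss to keep the generated
    precipitation distribution (including the tail) close to observations."""
    lo = min(real.min(), generated.min())
    hi = max(real.max(), generated.max())
    h_real, _ = np.histogram(real, bins=bins, range=(lo, hi))
    h_gen, _ = np.histogram(generated, bins=bins, range=(lo, hi))
    return np.abs(h_real / h_real.sum() - h_gen / h_gen.sum()).sum()

rng = np.random.default_rng(1)
real = rng.gamma(0.5, 4.0, size=10_000)   # heavy-tailed, rain-like target
good = rng.gamma(0.5, 4.0, size=10_000)   # matches the target distribution
bad = rng.normal(2.0, 1.0, size=10_000)   # misses the heavy tail

# Samples from the right distribution incur a much smaller penalty
assert histogram_penalty(real, good) < histogram_penalty(real, bad)
```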

This research will serve as a guide for improving the design of deep learning architectures and algorithms for application in Earth science, including feature representation and uncertainty quantification of Earth system data and the characterization of so-called “grey swan” events.

How to cite: Watson, C., Guevara, J., Szwarcman, D., Oliveira, D., Tizzei, L., Garcia, M., Avegliano, P., and Zadrozny, B.: Choose your own weather adventure: deep weather generation for “what-if” climate scenarios, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10773, https://doi.org/10.5194/egusphere-egu22-10773, 2022.

EGU22-10888 | Presentations | ITS2.7/AS5.2

How to utilize deep learning to understand climate dynamics? : An ENSO example. 

Na-Yeon Shin, Yoo-Geun Ham, Jeong-Hwan Kim, Minsu Cho, and Jong-Seong Kug

Many deep learning technologies have been applied to the Earth sciences, including weather forecasting, climate prediction, parameterization, resolution improvement, etc. Nonetheless, the difficulty in interpreting deep learning results still prevents their application to studies of climate dynamics. Here, we applied a convolutional neural network to understand El Niño–Southern Oscillation (ENSO) dynamics from long-term climate model simulations. The deep learning algorithm successfully predicted ENSO events with a high correlation skill of 0.82 for a 9-month lead. To interpret the deep learning results beyond the prediction skill, we first developed a “contribution map,” which estimates how much each grid point and variable contribute to the final output variable. Furthermore, we introduced a “sensitivity,” which estimates how sensitively the output variable responds to small perturbations of the input variables. The contribution map clearly shows the most important precursors of El Niño and La Niña development. In addition, the sensitivity reveals nonlinear relations between the precursors and the ENSO index, which helps us understand the respective role of each precursor. Our results suggest that the contribution map and sensitivity would also be beneficial for understanding other climate phenomena.
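Both diagnostics can be sketched generically for any trained model. In the toy sketch below a linear model stands in for the CNN, occlusion (zeroing an input) is assumed for the contribution map, and a finite difference for the sensitivity; the authors' exact definitions may differ:

```python
import numpy as np

# Toy stand-in for a trained network: ENSO index = weighted sum of grid points
weights = np.array([0.0, 0.5, -0.3, 2.0])
def model(x):
    return float(weights @ x)

x = np.ones(4)  # input "grid points"

# Contribution map: output change when each input is occluded (set to zero)
contribution = np.array([
    model(x) - model(np.where(np.arange(x.size) == i, 0.0, x))
    for i in range(x.size)
])

# Sensitivity: output change under a small perturbation of each input
eps = 1e-4
sensitivity = np.array([
    (model(x + eps * np.eye(x.size)[i]) - model(x)) / eps
    for i in range(x.size)
])

# For a linear model both diagnostics reduce to the weights themselves;
# for a real CNN they differ, which is what reveals the nonlinear relations
assert np.allclose(contribution, weights)
assert np.allclose(sensitivity, weights)
```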

How to cite: Shin, N.-Y., Ham, Y.-G., Kim, J.-H., Cho, M., and Kug, J.-S.: How to utilize deep learning to understand climate dynamics? : An ENSO example., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10888, https://doi.org/10.5194/egusphere-egu22-10888, 2022.

EGU22-11111 | Presentations | ITS2.7/AS5.2

Machine learning based estimation of regional Net Ecosystem Exchange (NEE) constrained by atmospheric inversions and ecosystem observations 

Samuel Upton, Ana Bastos, Fabian Gans, Basil Kraft, Wouter Peters, Jacob Nelson, Sophia Walther, Martin Jung, and Markus Reichstein

Accurate estimates and predictions of the global carbon fluxes are critical for our understanding of the global carbon cycle and climate change. Reducing the uncertainty of the terrestrial carbon sink and closing the budget imbalance between sources and sinks would improve our ability to accurately project future climate change. Net Ecosystem Exchange (NEE), the net flux of biogenic carbon from the land surface to the atmosphere, is only directly measured at a sparse set of globally distributed eddy-covariance measurement sites. To estimate the terrestrial carbon flux at the regional and global scale, a global gridded estimate of NEE must be accurately upscaled from a model trained at the ecosystem level. In this study, the Fluxcom system* is used to train a site-level model on remotely-sensed and meteorological variables derived from site measurements, MODIS and ECMWF ERA5 atmospheric reanalysis data. The non-representative distribution of these site-level data along with missing disturbance histories impart known biases to current upscaling efforts. Observations of atmospheric carbon may provide important additional information, improving the accuracy of the upscaled flux estimate. 

This study adds an atmospheric observational operator to the model training process that connects the ecosystem-level flux model to top-down observations of atmospheric carbon by adding an additional term to the objective function. The target data are regionally integrated fluxes from an ensemble of atmospheric inversions corrected for fossil-fuel emissions and lateral fluxes.  Calculating the regionally integrated flux estimate at each training step is computationally infeasible. Our hypothesis is that the regional flux can be modeled with a limited set of points and that this sparse model preserves sufficient information about the phenomena to act as a constraint for the underlying ecosystem-level model, improving regional and global upscaled products.  Experimental results show improvements in the machine learning based regional estimates of NEE while preserving features such as the seasonal variability in the estimated flux.
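The additional term in the objective function might look as follows. This is a simplified sketch with hypothetical names and weighting; the operational FLUXCOM-style setup and the inversion-based target are considerably more involved:

```python
import numpy as np

def combined_loss(pred_site, obs_site, pred_grid, area_weights, regional_flux, lam=0.1):
    """Site-level MSE plus a penalty on the mismatch between the
    area-integrated model flux and an inversion-based regional flux."""
    site_term = np.mean((pred_site - obs_site) ** 2)
    integrated = np.sum(pred_grid * area_weights)   # sparse regional integration
    regional_term = (integrated - regional_flux) ** 2
    return site_term + lam * regional_term

# Illustrative numbers only
pred_site = np.array([1.0, 2.0])
obs_site = np.array([1.0, 2.5])
pred_grid = np.array([0.2, 0.3, 0.1])   # NEE predicted at a limited set of points
weights = np.array([1.0, 1.0, 2.0])     # area represented by each point
loss = combined_loss(pred_site, obs_site, pred_grid, weights, regional_flux=0.7)
assert np.isclose(loss, 0.125)          # regional term vanishes, site MSE remains
```

Evaluating the regional term on a limited set of points is what makes the constraint cheap enough to apply at every training step.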

 

*Jung, Martin, Christopher Schwalm, Mirco Migliavacca, Sophia Walther, Gustau Camps-Valls, Sujan Koirala, Peter Anthoni, et al. 2020. “Scaling Carbon Fluxes from Eddy Covariance Sites to Globe: Synthesis and Evaluation of the FLUXCOM Approach.” Biogeosciences 17 (5): 1343–65. 

 

How to cite: Upton, S., Bastos, A., Gans, F., Kraft, B., Peters, W., Nelson, J., Walther, S., Jung, M., and Reichstein, M.: Machine learning based estimation of regional Net Ecosystem Exchange (NEE) constrained by atmospheric inversions and ecosystem observations, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11111, https://doi.org/10.5194/egusphere-egu22-11111, 2022.

EGU22-11216 | Presentations | ITS2.7/AS5.2

Unsupervised clustering of Lagrangian trajectories in the Labrador Current 

Noémie Planat and Mathilde Jutras

Lagrangian studies are a widely used and powerful way to analyse and interpret phenomena in oceanography and the atmospheric sciences. Such studies can be based on datasets consisting either of real trajectories (e.g. oceanic drifters or floats) or of virtual trajectories computed from model or observation-derived velocity fields. Such data can help investigate pathways of water masses, pollutants or storms, or identify important convection areas, to name a few. As many of these analyses are based on large volumes of data that can be challenging to examine, machine learning can provide an efficient and automated way to classify information or detect patterns.

Here, we present an application of unsupervised clustering to the identification of the main pathways of the shelf-break branch of the Labrador Current, a critical component of the North Atlantic circulation. The current flows southward along the Labrador Shelf and splits in the region of the Grand Banks, either retroflecting north-eastward and feeding the subpolar basin of the North Atlantic Ocean (SPNA) or continuing westward along the shelf-break, feeding the Slope Sea and the east coast of North America. The proportion feeding each area impacts their salinity and convection, as well as their biogeochemistry, with consequences on marine life.

Our dataset is composed of millions of virtual particle trajectories computed from the water velocities of the GLORYS12 ocean reanalysis. We apply an unsupervised machine learning clustering algorithm to the shape of the trajectories. The algorithm is a kernelized k-means++ algorithm with a minimal number of hyperparameters, coupled to a kernelized Principal Component Analysis (PCA) feature reduction. We will present the pre-processing of the data, as well as canonical and physics-based methods for choosing the hyperparameters.
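A stripped-down version of this pipeline can be sketched on synthetic "trajectory shape" vectors: an RBF kernel, kernel PCA for feature reduction, and a plain k-means standing in for the kernelized k-means++ (a sketch under these substitutions, not the authors' implementation):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Pairwise RBF (Gaussian) kernel matrix."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_pca(K, n_components=2):
    """Project onto the leading components of the centered kernel matrix."""
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def kmeans(X, k, n_iter=20):
    """Plain k-means with deterministic initialization."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

# Two synthetic groups of "trajectory shape" vectors (e.g. flattened positions)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 6)),   # pathway 1
               rng.normal(3.0, 0.1, (20, 6))])  # pathway 2

features = kernel_pca(rbf_kernel(X, gamma=0.5), n_components=2)
labels = kmeans(features, k=2)

# The two pathways are recovered as two clusters
assert labels[:20].tolist().count(labels[0]) == 20
assert labels[20:].tolist().count(labels[-1]) == 20
assert labels[0] != labels[-1]
```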

The algorithm identifies six main pathways of the Labrador Current. Applying the resulting classification method to 25 years of ocean reanalysis, we quantify the relative importance of the six pathways in time and construct a retroflection index that is used to study the drivers of the retroflection variability. This study highlights the potential of such a simple clustering method for Lagrangian trajectory analysis in oceanography or in other climate applications.

How to cite: Planat, N. and Jutras, M.: Unsupervised clustering of Lagrangian trajectories in the Labrador Current, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11216, https://doi.org/10.5194/egusphere-egu22-11216, 2022.

EGU22-11388 | Presentations | ITS2.7/AS5.2 | Highlight

Learning ENSO-related Principal Modes of Vegetation via a Granger-Causal Variational Autoencoder 

Gherardo Varando, Miguel-Ángel Fernández-Torres, and Gustau Camps-Valls

Tackling climate change requires understanding the complex phenomena occurring on the planet, and discovering teleconnection patterns is an essential part of this endeavor. Events like the El Niño Southern Oscillation (ENSO) impact essential climate variables at large distances and influence the underlying Earth system dynamics. However, their automatic identification from the wealth of observational data is still unresolved. Nonlinearities, nonstationarities and the (ab)use of correlation analyses hamper the discovery of true causal patterns. Classical approaches proceed by first extracting principal modes of variability and second performing lag-correlation or Granger-causal analysis to identify possible teleconnections. While the principal modes are an effective representation of the data, they may not be causally meaningful.
To address this, we introduce a deep learning methodology that extracts nonlinear latent representations from spatio-temporal Earth data that are Granger-causally related to a given climate index. The proposed algorithm consists of a variational autoencoder trained with an additional causal penalization that enforces the latent representation to be (partially) Granger-causally related to the considered signal. The causal loss term is obtained by training two additional autoregressive models to forecast some of the latent signals, one of them including the target signal as a predictor. The causal penalization is then computed by comparing the log variances of the two autoregressive models, similarly to the standard Granger causality approach.
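The comparison of residual variances behind this penalization can be sketched with ordinary least-squares autoregressive fits (in the actual method the autoregressive models are trained alongside the autoencoder; the signals below are synthetic):

```python
import numpy as np

def ar_residual_var(y, predictors, lags=2):
    """Least-squares autoregressive fit of y on lagged predictors;
    returns the residual variance (the quantity compared in Granger tests)."""
    n = len(y)
    X = np.column_stack([s[lag:n - lags + lag]
                         for s in predictors for lag in range(lags)])
    X = np.column_stack([np.ones(n - lags), X])   # intercept
    target = y[lags:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return (target - X @ beta).var()

# Synthetic latent signal z driven by a climate index c (c Granger-causes z)
rng = np.random.default_rng(0)
n = 2000
c = rng.normal(size=n)
z = np.zeros(n)
for t in range(1, n):
    z[t] = 0.5 * z[t - 1] + 0.8 * c[t - 1] + 0.1 * rng.normal()

var_without = ar_residual_var(z, [z])      # AR model on z alone
var_with = ar_residual_var(z, [z, c])      # AR model including the index
penalty = np.log(var_without) - np.log(var_with)   # > 0: c helps forecast z
assert penalty > 0.0
```

Maximizing such a penalty during training pushes the latent signals toward representations the index genuinely helps to forecast.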

The major drawback of deep autoencoders with respect to classical linear principal component approaches is that the learned representations are not straightforwardly interpretable. To address this point, we perform synthetic interventions in the latent space and analyse the differences in the recovered NDVI signal. We illustrate the feasibility of the described approach in a study of the impact of ENSO on vegetation, which allows for a more rigorous study of impacts on ecosystems globally. The output maps show NDVI patterns consistent with the known phenomena induced by El Niño events.

How to cite: Varando, G., Fernández-Torres, M.-Á., and Camps-Valls, G.: Learning ENSO-related Principal Modes of Vegetation via a Granger-Causal Variational Autoencoder, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11388, https://doi.org/10.5194/egusphere-egu22-11388, 2022.

EGU22-11451 | Presentations | ITS2.7/AS5.2

Time evolution of temperature profiles retrieved from 13 years of IASI data using an artificial neural network 

Marie Bouillon, Sarah Safieddine, Simon Whitburn, Lieven Clarisse, Filipe Aires, Victor Pellet, Olivier Lezeaux, Noëlle A. Scott, Marie Doutriaux-Boucher, and Cathy Clerbaux

The IASI remote sensor measures Earth’s thermal infrared radiation over 8461 channels between 645 and 2760 cm⁻¹. Atmospheric temperatures at different altitudes can be retrieved from the radiances measured in the CO2 absorption bands (645–800 cm⁻¹ and 2250–2400 cm⁻¹) by selecting the channels that are most sensitive to the temperature profile. The three IASI instruments on board the Metop satellites launched in 2006, 2012 and 2018 will provide a long time series for temperature, adequate for studying the long-term evolution of atmospheric temperature. However, over the past 14 years, EUMETSAT, which processes radiances and computes atmospheric temperatures, has carried out several updates of the processing algorithms for both radiances and temperatures, leading to non-homogeneous time series and thus large difficulties in the computation of trends for temperature and atmospheric composition.

 

In 2018, EUMETSAT reprocessed the radiances with the most recent version of the algorithm, so a homogeneous radiance dataset is now available. In this study, we retrieve a new temperature record from the homogeneous IASI radiances using an artificial neural network (ANN). We train the ANN with IASI radiances as input and European Centre for Medium-Range Weather Forecasts ERA5 reanalysis temperatures as output. We validate the results against ERA5 and in situ radiosonde temperatures from the ARSA database. Between 750 and 7 hPa, where IASI has most of its sensitivity, very good agreement is observed between the three datasets. This work suggests that an ANN can be a simple yet powerful tool to retrieve IASI temperatures at different altitudes in the upper troposphere and in the stratosphere, allowing us to construct a homogeneous and consistent temperature data record.

 

We use this new dataset to study extreme events such as sudden stratospheric warmings, and to compute trends over the IASI coverage period (2008–2020). We find that over the past thirteen years there has been a general warming of the troposphere, more pronounced at the poles and at mid-latitudes (0.5 K/decade at mid-latitudes, 1 K/decade at the North Pole). The stratosphere is cooling on average, except at the South Pole as a result of the ozone layer recovery and a sudden stratospheric warming in 2019. The cooling is most pronounced in the equatorial upper stratosphere (−1 K/decade).

How to cite: Bouillon, M., Safieddine, S., Whitburn, S., Clarisse, L., Aires, F., Pellet, V., Lezeaux, O., Scott, N. A., Doutriaux-Boucher, M., and Clerbaux, C.: Time evolution of temperature profiles retrieved from 13 years of IASI data using an artificial neural network, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11451, https://doi.org/10.5194/egusphere-egu22-11451, 2022.

EGU22-12165 | Presentations | ITS2.7/AS5.2

Filling in the Gaps: Consistently detecting previously unidentified extreme weather event impacts

M. Schwarz and F. Pretis

Existing databases for extreme weather events such as floods, heavy rainfall events, or droughts are heavily reliant on authorities and weather services manually entering details about the occurrence of an event. This reliance has led to a massive geographical imbalance in the likelihood of extreme weather events being recorded, with a vast number of events, especially in the developing world, remaining unrecorded. With continuing climate change, a lack of systematic extreme weather accounting in developing countries can lead to a substantial misallocation of funds for adaptation measures. To address this imbalance, in this pilot study we combine socio-economic data with climate and geographic data and use several machine-learning algorithms as well as traditional (spatial) econometric tools to predict the occurrence of extreme weather events and their impacts in the absence of information from manual records. Our preliminary results indicate that machine-learning approaches for the detection of the impacts of extreme weather could be a crucial tool in establishing a coherent global disaster record system. Such systems could also play a role in discussions around future Loss and Damages.

How to cite: Schwarz, M. and Pretis, F.: Filling in the Gaps: Consistently detecting previously unidentified extreme weather event impacts, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12165, https://doi.org/10.5194/egusphere-egu22-12165, 2022.

EGU22-12720 | Presentations | ITS2.7/AS5.2 | Highlight

Interpretable Deep Learning for Probabilistic MJO Prediction 

Hannah Christensen and Antoine Delaunay

The Madden–Julian Oscillation (MJO) is the dominant source of sub-seasonal variability in the tropics. It consists of an eastward-moving region of enhanced convection coupled to changes in zonal winds. It is not possible to predict the precise evolution of the MJO, so subseasonal forecasts are generally probabilistic. Ideally, the spread of the forecast probability distribution would vary from day to day depending on the instantaneous predictability of the MJO; operational subseasonal forecasting models do not have this property. We present a deep convolutional neural network that produces skilful state-dependent probabilistic MJO forecasts. This statistical model accounts for intrinsic chaotic uncertainty by predicting the standard deviation about the mean, and for model uncertainty using a Monte-Carlo dropout approach. Interpretation of the mean forecasts from the neural network highlights known MJO mechanisms, providing confidence in the model, while interpretation of the predicted uncertainty points to new physical mechanisms governing MJO predictability.
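The two uncertainty components can be sketched as follows: a Gaussian negative log-likelihood loss lets a network predict a state-dependent spread alongside the mean, and repeated stochastic forward passes with dropout left active sample the model uncertainty. The toy layer below is illustrative, not the authors' architecture:

```python
import numpy as np

def gaussian_nll(y, mu, sigma):
    """Negative log-likelihood of y under N(mu, sigma^2): the loss that lets a
    network learn both a mean and a state-dependent standard deviation."""
    return 0.5 * np.log(2 * np.pi * sigma ** 2) + 0.5 * ((y - mu) / sigma) ** 2

# A sharper (smaller sigma) forecast is rewarded when it verifies well...
assert gaussian_nll(0.0, 0.0, 0.5) < gaussian_nll(0.0, 0.0, 2.0)
# ...and penalized when it is badly wrong
assert gaussian_nll(3.0, 0.0, 0.5) > gaussian_nll(3.0, 0.0, 2.0)

# Monte-Carlo dropout: stochastic forward passes at prediction time,
# whose spread estimates model uncertainty
rng = np.random.default_rng(0)
def stochastic_forward(x, p_drop=0.2):
    w = np.array([0.7, -0.2, 0.4])          # toy layer weights
    mask = rng.random(w.size) > p_drop      # dropout mask, kept active
    return (w * mask / (1 - p_drop)) @ x

x = np.array([1.0, 2.0, 3.0])
samples = np.array([stochastic_forward(x) for _ in range(500)])
model_uncertainty = samples.std()
assert model_uncertainty > 0.0
```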

How to cite: Christensen, H. and Delaunay, A.: Interpretable Deep Learning for Probabilistic MJO Prediction, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12720, https://doi.org/10.5194/egusphere-egu22-12720, 2022.

EGU22-12822 | Presentations | ITS2.7/AS5.2

Assessing model dependency in CMIP5 and CMIP6 based on their spatial dependency structure with probabilistic network models 

Catharina Elisabeth Graafland and Jose Manuel Gutiérrez Gutiérrez

Probabilistic network models (PNMs) are well-established data-driven modeling and machine learning prediction techniques used in many disciplines, including climate analysis. These techniques can efficiently learn the underlying (spatial) dependency structure and a consistent probabilistic model from data (e.g. gridded reanalysis or GCM outputs for particular variables; near-surface temperature in this work), thus constituting a truly probabilistic backbone of the system underlying the data. The complex structure of the dataset is encoded using both pairwise and conditional dependencies and can be explored and characterized using network and probabilistic metrics. When applied to climate data, Bayesian networks faithfully reveal the various long-range teleconnections relevant in the dataset, in particular those emerging during El Niño periods (Graafland, 2020).

 

In this work we apply probabilistic Gaussian networks to extract and characterize the most essential spatial dependencies of the simulations generated by the different GCMs contributing to CMIP5 and CMIP6 (Eyring, 2016). In particular, we analyze the problem of model interdependency (Boé, 2018), which poses practical problems for the use of these multi-model simulations in applications (it is often not clear what exactly makes one model different from or similar to another). We show that probabilistic Gaussian networks provide a promising tool to characterize the spatial structure of GCMs using simple metrics that can be used to analyze how and where differences in dependency structures are manifested. The probabilistic distance measure allows CMIP5 and CMIP6 models to be charted by their closeness to reanalysis datasets that rely on observations. The measure also identifies significant atmospheric model changes that CMIP5 GCMs underwent in their transition to CMIP6.
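How a Gaussian model encodes conditional (rather than only pairwise) dependencies can be sketched with partial correlations read off the precision matrix — a minimal illustration, far simpler than the Bayesian-network learning used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Chain dependency: x -> y -> z (x and z are linked only through y)
x = rng.normal(size=n)
y = 0.8 * x + 0.6 * rng.normal(size=n)
z = 0.8 * y + 0.6 * rng.normal(size=n)
data = np.column_stack([x, y, z])

cov = np.cov(data, rowvar=False)
prec = np.linalg.inv(cov)   # precision matrix

def partial_corr(prec, i, j):
    """Partial correlation between variables i and j given all others."""
    return -prec[i, j] / np.sqrt(prec[i, i] * prec[j, j])

# x and z are strongly correlated pairwise...
assert abs(np.corrcoef(x, z)[0, 1]) > 0.3
# ...but conditionally (nearly) independent given y: the sparse backbone
assert abs(partial_corr(prec, 0, 2)) < 0.1
```

This is the sense in which a probabilistic network strips spurious pairwise links down to the direct dependencies.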

 

References:

 

Boé, J. Interdependency in Multimodel Climate Projections: Component Replication and Result Similarity. Geophys. Res. Lett. 45, 2771–2779, DOI: 10.1002/2017GL076829 (2018).

 

Eyring, V. et al. Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geosci. Model. Dev. 9, 1937–1958, DOI: 10.5194/gmd-9-1937-2016  (2016).

 

Graafland, C.E., Gutiérrez, J.M., López, J.M. et al. The probabilistic backbone of data-driven complex networks: an example in climate. Sci Rep 10, 11484 (2020). DOI: 10.1038/s41598-020-67970-y



Acknowledgement

 

The authors would like to acknowledge project ATLAS (PID2019-111481RB-I00) funded by MCIN/AEI (doi:10.13039/501100011033). We also acknowledge support from Universidad de Cantabria and Consejería de Universidades, Igualdad, Cultura y Deporte del Gobierno de Cantabria via the “instrumentación y ciencia de datos para sondear la naturaleza del universo” project for funding this work. L.G. acknowledges support from the Spanish Agencia Estatal de Investigación through the Unidad de Excelencia María de Maeztu with reference MDM-2017-0765.



How to cite: Graafland, C. E. and Gutiérrez, J. M. G.: Assessing model dependency in CMIP5 and CMIP6 based on their spatial dependency structure with probabilistic network models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12822, https://doi.org/10.5194/egusphere-egu22-12822, 2022.

EGU22-12858 | Presentations | ITS2.7/AS5.2

Identifying drivers of extreme reductions in carbon uptake of forests with interpretable machine learning 

Mohit Anand, Gustau Camps-Valls, and Jakob Zscheischler

Forests form one of the major components of the carbon cycle and take up large amounts of carbon dioxide from the atmosphere, thereby slowing down the rate of climate change. Carbon uptake by forests is a highly complex process strongly controlled by meteorological forcing, mainly for two reasons. First, forests have a large storage capacity that acts as a buffer against short-duration changes in meteorological drivers; the response can thus be very complex and extend over a long time. Second, responses are often triggered by combinations of multiple compounding drivers, including precipitation, temperature and solar radiation, and effects may compound between variables and across time. Therefore, a large amount of data is required to identify the complex drivers of adverse forest responses to climate forcing. Recent advances in machine learning offer a suite of promising tools to analyse large amounts of data and address the challenge of identifying complex drivers of impacts. Here we analyse the potential of machine learning to identify the compounding drivers of reduced carbon uptake and forest mortality. To this end, we generate 200,000 years of gross and net carbon uptake with the physically-based forest model FORMIND, simulating a beech forest in Germany. The climate data are generated with a weather generator (AWEGEN-1D) from bias-corrected ERA5 reanalysis data. Classical machine learning models such as random forests and support vector machines, as well as deep neural networks, are trained to estimate gross primary production. Deep learning models involving convolutional layers are found to perform better than the classical machine learning models. Initial results show that at least three years of weather data are required to predict annual carbon uptake with high accuracy, highlighting the complex lagged effects that characterize forests. We assess the performance of the different models and discuss their interpretability regarding the identification of impact drivers.



How to cite: Anand, M., Camps-Valls, G., and Zscheischler, J.: Identifying drivers of extreme reductions in carbon uptake of forests with interpretable machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12858, https://doi.org/10.5194/egusphere-egu22-12858, 2022.

EGU22-13345 | Presentations | ITS2.7/AS5.2

A novel approach to systematically analyze the error structure of precipitation datasets using decision trees 

Xinxin Sui, Zhi Li, Guoqiang Tang, Zong-Liang Yang, and Dev Niyogi
Multiple environmental factors influence the error structure of precipitation datasets. Conventional precipitation evaluation analyzes how statistical indicators vary with only one or two factors at a time via dimensionality reduction; as a result, the compound influences of multiple factors are superposed rather than disentangled. To overcome this deficiency, this study presents a novel approach to systematically and objectively analyze the error structure of precipitation products using decision trees. This data-driven method can analyze multiple factors simultaneously and extract the compound effects of various influencing factors. By interpreting the decision tree structures, the error characteristics of precipitation products are investigated. Three types of precipitation products (two satellite-based: the ‘top-down’ IMERG and the ‘bottom-up’ SM2RAIN-ASCAT, and one reanalysis: ERA5-Land) are evaluated across CONUS. The study period is from 2010 to 2019, and the ground-based Stage IV precipitation dataset is used as the ground truth. By data mining 60 binary decision trees, the spatiotemporal pattern of errors and the land surface influences are analyzed.
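The core mechanism — a tree node that attributes error structure to an environmental factor by finding the best threshold split — can be sketched as a single regression stump (synthetic data; the study itself fits full binary decision trees):

```python
import numpy as np

def best_split(factor, error):
    """Threshold on one factor that best separates error values
    (minimum summed within-group variance): one decision-tree node."""
    order = np.argsort(factor)
    f, e = factor[order], error[order]
    best_score, best_threshold = np.inf, None
    for i in range(1, len(f)):
        left, right = e[:i], e[i:]
        score = left.var() * len(left) + right.var() * len(right)
        if score < best_score:
            best_score = score
            best_threshold = (f[i - 1] + f[i]) / 2
    return best_threshold

# Synthetic example: a product underestimates precipitation above 2000 m
rng = np.random.default_rng(0)
elevation = rng.uniform(0, 4000, size=400)
bias = np.where(elevation > 2000, -2.0, 0.0) + rng.normal(0, 0.1, size=400)

threshold = best_split(elevation, bias)
assert abs(threshold - 2000) < 100   # the stump recovers the elevation driver
```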
 
Results indicate that IMERG and ERA5-Land perform better than SM2RAIN-ASCAT, with higher accuracy and more stable interannual patterns over the ten years of data analyzed. The conventional bias evaluation finds that ERA5-Land and SM2RAIN-ASCAT underestimate in summer and winter, respectively. The decision tree method cross-assesses three spatiotemporal factors and finds that the underestimation of ERA5-Land occurs east of the Rocky Mountains, while SM2RAIN-ASCAT underestimates precipitation over high latitudes, especially in winter. Additionally, the decision tree method ascribes systematic errors to nine physical variables, of which distance to the coast, soil type and DEM are the three dominant features. On the other hand, land cover classification and the topographic position index are two relatively weak factors.

How to cite: Sui, X., Li, Z., Tang, G., Yang, Z.-L., and Niyogi, D.: A novel approach to systematically analyze the error structure of precipitation datasets using decision trees, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13345, https://doi.org/10.5194/egusphere-egu22-13345, 2022.

EGU22-693 | Presentations | GM2.8

An open-source Python package for DEM generation and landslide volume estimation based on Sentinel-1 imagery 

Lorena Abad, Daniel Hölbling, Zahra Dabiri, and Benjamin Robson

Landslide assessments require timely, accurate and comprehensive information, for which Earth observation (EO) data such as optical and radar satellite imagery play an important role. Volume estimates are important to understand landslide characteristics and (post-failure) behaviour. Differencing of pre- and post-event digital elevation models (DEMs) is a suitable method to estimate landslide volumes remotely, leveraging EO techniques. However, high costs for commercial DEM products, limited temporal and spatial coverage and resolution, or insufficient accuracy hamper the potential of this method. Sentinel-1 synthetic aperture radar (SAR) data from the European Union's Earth observation programme Copernicus open the opportunity to leverage free EO data to generate multi-temporal topographic datasets.

With the project SliDEM (Assessing the suitability of DEMs derived from Sentinel-1 for landslide volume estimation), we explore the potential of Sentinel-1 for the generation of DEMs for landslide assessment. To this end, we are developing a semi-automated and transferable workflow available as an open-source Python package. The package consists of different modules to 1) query Sentinel-1 image pairs that match a given geographical and temporal extent and satisfy perpendicular and temporal baseline thresholds; 2) download and archive only suitable Sentinel-1 image pairs; 3) produce DEMs using interferometric SAR (InSAR) techniques available in the open-source Sentinel Application Platform (SNAP), including necessary post-processing such as terrain correction and co-registration; 4) difference pre- and post-event DEMs to quantify landslide volumes; and 5) assess the accuracy of, and validate, the DEMs and volume estimates against reference data.
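Step 1) of the workflow can be sketched as a simple baseline filter; the field names, threshold values and scene list below are illustrative and do not reflect the SliDEM package's actual API.

```python
# Sketch of InSAR pair selection: keep pairs with a short temporal baseline
# (to preserve coherence) and a perpendicular baseline large enough for
# topographic sensitivity. All names and thresholds are hypothetical.
from datetime import date
from itertools import combinations

def suitable_pairs(scenes, max_days=12, min_perp_m=120.0):
    """Return id pairs whose temporal/perpendicular baselines pass thresholds."""
    pairs = []
    for a, b in combinations(scenes, 2):
        dt = abs((a["date"] - b["date"]).days)
        bperp = abs(a["perp_baseline_m"] - b["perp_baseline_m"])
        if dt <= max_days and bperp >= min_perp_m:
            pairs.append((a["id"], b["id"]))
    return pairs

scenes = [
    {"id": "S1A_0412", "date": date(2021, 4, 12), "perp_baseline_m": 0.0},
    {"id": "S1B_0418", "date": date(2021, 4, 18), "perp_baseline_m": 150.0},
    {"id": "S1A_0606", "date": date(2021, 6, 6), "perp_baseline_m": 30.0},
]

print(suitable_pairs(scenes))  # only the April pair meets both thresholds
```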

We evaluate and validate our workflow in terms of reliability, performance, reproducibility, and transferability over several major landslides in Austria and Norway. We distribute our work within a Docker container, which allows the SliDEM Python package to be used along with all its software dependencies in a structured and convenient way, reducing usability problems related to software versioning. The SliDEM workflow represents an important contribution to the field of natural hazard research by providing an open-source, low-cost, transferable, and semi-automated method for DEM generation and landslide volume estimation.

How to cite: Abad, L., Hölbling, D., Dabiri, Z., and Robson, B.: An open-source Python package for DEM generation and landslide volume estimation based on Sentinel-1 imagery, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-693, https://doi.org/10.5194/egusphere-egu22-693, 2022.

EGU22-1191 | Presentations | GM2.8

DEM quality assessment and improvement in noise quantification for geomorphic application in steep mountainous terrain 

Benjamin Purinton and Bodo Bookhagen

Spaceborne digital elevation models (DEMs) are fundamental data for mapping and analyzing geomorphic features at regional and continental scales, but are limited by both their spatial resolution and accuracy. Typically, accuracy is measured against point- or profile-based geodetic measurements (e.g., sparse GNSS). We develop new methods to quantify the vertical uncertainty in spaceborne DEMs relevant to geomorphic analysis, focusing on the pixel-to-pixel variability internal to a given DEM, which we term inter-pixel consistency. Importantly, the methods we develop are not based on external geodetic measurements. Our code is published open-source (https://github.com/UP-RS-ESP/DEM-Consistency-Metrics), and we particularly highlight a novel sun-angle rotation and hillshade-filtering approach based on the visual, qualitative assessment of DEM hillshades. Since our study area in the arid Central Andes contains diverse steep (volcano) and flat (salar) features, the environment is ideal for vegetation-free assessments of DEM quality across a range of topographic settings. We compare global 1 arcsec (~30 m) resolution DEMs (SRTM, ASTER, ALOS, TanDEM-X, Copernicus), and find high quality (high inter-pixel consistency) in the newest Copernicus DEM. At higher spatial resolution, we also seek to improve the stereo-processing of 3 m SPOT6 optical DEMs using the open-source Ames Stereo Pipeline. This includes optimizing key parameters and processing steps, as well as developing metrics for DEM uncertainty masks based on the underlying image texture of the optical satellite scenes used to triangulate elevations. Although higher-resolution spaceborne DEMs like SPOT6 are only available for limited spatial areas (depending on funds and processing power), the improvement in geomorphic feature identification and quantification at the hillslope scale is significant compared to 30 m datasets. Improved DEM quality metrics provide useful constraints on hazard assessment and geomorphic analysis for the Earth and other planetary bodies.
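The flavour of the hillshade-based consistency idea can be sketched as follows; the metric, sun-azimuth set and synthetic surfaces are illustrative simplifications, not the published code (see the linked repository for that).

```python
# Sketch: render hillshades at several sun azimuths and use the variance of
# the shading as a rough proxy for inter-pixel noise. Parameters are toy values.
import numpy as np

def hillshade(dem, cellsize=30.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Standard Lambertian hillshade in [0, 1]."""
    gy, gx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(gx, gy))
    aspect = np.arctan2(-gx, gy)
    az = np.radians(360.0 - azimuth_deg + 90.0)
    alt = np.radians(altitude_deg)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

def shade_variability(dem):
    """Mean per-pixel std of hillshades rendered at four sun azimuths."""
    stack = np.stack([hillshade(dem, azimuth_deg=a) for a in (0, 90, 180, 270)])
    return float(stack.std(axis=0).mean())

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0 * np.pi, 64)
smooth = 20.0 * np.sin(x)[None, :] * np.ones((64, 1))  # gentle ridge-and-valley
noisy = smooth + rng.normal(0.0, 10.0, smooth.shape)   # added pixel noise

# A noisier DEM shades more erratically as the sun is rotated.
print(shade_variability(smooth) < shade_variability(noisy))  # True
```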

How to cite: Purinton, B. and Bookhagen, B.: DEM quality assessment and improvement in noise quantification for geomorphic application in steep mountainous terrain, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1191, https://doi.org/10.5194/egusphere-egu22-1191, 2022.

EGU22-2000 | Presentations | GM2.8

Assessment of soil erosion induced by different tillage practices through multi-temporal geomorphometric analyses 

Sara Cucchiaro, Laura Carretta, Paolo Nasta, Federico Cazorzi, Roberta Masin, Nunzio Romano, and Paolo Tarolli

One of the main environmental threats to sustainability and crop productivity in the agricultural sector is soil erosion. No-till management is considered a key approach for mitigating this problem in agricultural fields. The measurement of soil erosion is particularly challenging, especially when surficial morphological changes are relatively small. Conventional experiments are commonly time-consuming and labour-intensive in terms of both field surveys and laboratory methods. However, the Structure from Motion (SfM) photogrammetry technique has enhanced experimental activities by enabling the temporal evolution of soil erosion to be assessed through detailed micro-topography. This work presents a multi-temporal quantification of soil erosion, using SfM applied to Uncrewed Aerial Vehicle (UAV) surveys, to understand the evolution of no-till (NT) and conventional tillage (CT) experimental plots. Considering that plot-scale soil-surface changes (mm grid size) are several orders of magnitude smaller than the surveyed extent, it was necessary to minimise SfM errors (e.g., co-registration and interpolation errors) in the volumetric estimates to reduce noise as much as possible. Therefore, a methodological workflow was developed to analyse and identify the effectiveness of multi-temporal SfM-derived products, e.g. the conventional Difference of Digital Terrain Models (DoDs) and the less used Differences of Meshes (DoMs), for soil volume computations. To identify the most suitable estimation method, the research validated the erosion volumes calculated from the SfM outputs against the amount of soil collected directly through conventional runoff and sediment measurements in the field. This study presents a novel approach of using DoMs instead of DoDs to accurately describe micro-topography changes and sediment dynamics.
Another key and innovative aspect of this research, often overlooked in soil erosion studies, was the identification of the contributing sediment surface, by delineating the channels potentially routing runoff directly to the water collectors. The sediment paths and connected areas inside the plots were identified using a multi-temporal analysis of the sediment connectivity index to support the volumetric estimates. The DoM volume estimates showed better results than the DoDs, with a mild overestimation compared to in-situ measurements. This difference was attributable to other factors (e.g., soil compaction processes) rather than to photogrammetric or geometric ones. The developed workflow enabled a very detailed quantification of soil erosion dynamics for assessing the mitigation effects of no-till management, and, being based on SfM and UAV technologies, can be extended in the future to different scales at low cost.
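The DoD computation at the heart of the volumetric estimates can be sketched as follows; the level of detection (LoD) threshold and grid values are illustrative, and the DoM (mesh-based) variant is not reproduced here.

```python
# Sketch of DEM-of-difference volume computation: difference pre- and
# post-event terrain models and convert the thresholded change to
# erosion/deposition volumes. LoD and grid values are toy numbers.
import numpy as np

def dod_volumes(dem_pre, dem_post, cellsize, lod=0.01):
    """Return (erosion, deposition) volumes in m^3, ignoring |dz| < LoD."""
    dz = dem_post - dem_pre
    dz = np.where(np.abs(dz) < lod, 0.0, dz)   # mask sub-LoD noise
    cell_area = cellsize ** 2
    erosion = -dz[dz < 0].sum() * cell_area
    deposition = dz[dz > 0].sum() * cell_area
    return float(erosion), float(deposition)

pre = np.zeros((10, 10))
post = np.zeros((10, 10))
post[2:4, 2:4] -= 0.05        # a small rill: four cells lowered by 5 cm
post[7, 7] += 0.004           # sub-LoD change, treated as noise

ero, dep = dod_volumes(pre, post, cellsize=0.01, lod=0.01)  # 1 cm grid
print(ero, dep)  # only the rill survives the threshold
```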

How to cite: Cucchiaro, S., Carretta, L., Nasta, P., Cazorzi, F., Masin, R., Romano, N., and Tarolli, P.: Assessment of soil erosion induced by different tillage practices through multi-temporal geomorphometric analyses, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2000, https://doi.org/10.5194/egusphere-egu22-2000, 2022.

EGU22-2877 | Presentations | GM2.8

Coastal erosion: an overlooked source of sediments to the ocean. Europe as an example 

Vincent Regard, Mélody Prémaillon, Thomas Dewez, Sébastien Carretier, Catherine Jeandel, Yves Godderis, Stéphane Bonnet, Jacques Schott, Kevin Pedoja, Joseph Martinod, Jérôme Viers, and Sébastien Fabre

Eroding rocky coasts export sediment to the ocean, the amount of which is poorly known; at the global scale it could amount to 0.15–0.4 Gt/a (1). Recent evaluations of large retreat rates on monitored sections of sea cliffs indicate that this flux can be comparable to the sediment input from medium to large rivers. We quantify the rocky coast input to the ocean sediment budget at the European scale, Europe being the continent with the best available dataset.

The sediment budget from European rocky coasts was computed from cliff lengths, heights and retreat rates. For that, we first compiled a large number of well-documented retreat rates, the analysis of which showed that retreat rates are to first order explained by cliff lithology (GlobR2C2, 2). Median erosion rates are 2.9 cm/a for hard rocks, 10 cm/a for medium rocks and 23 cm/a for weak rocks. These retreat rates were then applied to the European coast classification (EMODnet), which gives the relative coast length for cliffs of various lithology types. Finally, cliff heights were derived from the EU-DEM (https://ec.europa.eu/eurostat/web/gisco/geodata/reference-data/elevation).

Due to data availability, we worked on only ~70% of Europe, corresponding to a 127,000 km-long coastline (65,000 km of rocky coast). We calculate that this coastline supplies 111±65 Mt/a, corresponding to 0.38 times the sediment input from rivers draining the equivalent area (3.56 × 10⁶ km²), computed from Milliman and Farnsworth's database (3) (290 Mt/a). A crude extrapolation to the 1.5 × 10⁶ km-long global coastline yields 0.6–2.4 Gt/a, an order of magnitude less than the sediment discharge from rivers (11–21 Gt/a; e.g., 3).
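As a sanity check on the order of magnitude, the budget arithmetic (length × height × retreat rate, summed over lithology classes) can be sketched as follows; the lengths, heights and rock density are illustrative round numbers — only the median retreat rates are taken from the abstract.

```python
# Back-of-the-envelope cliff sediment flux:
# flux = cliff length * cliff height * retreat rate * rock density.
RHO = 2.0e3           # kg/m^3, order-of-magnitude rock density (assumed)
KM, CM = 1.0e3, 1.0e-2

cliff_classes = {     # length (km), mean height (m), median retreat (cm/a)
    "hard":   (30_000, 20, 2.9),
    "medium": (10_000, 15, 10.0),
    "weak":   (5_000, 10, 23.0),
}

flux_t_per_a = sum(
    L * KM * h * r * CM * RHO / 1.0e3      # m^3/a * kg/m^3 -> t/a
    for L, h, r in cliff_classes.values()
)
print(f"{flux_t_per_a / 1.0e6:.0f} Mt/a")  # same order as the reported 111 Mt/a
```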

This up-to-now overlooked sedimentary source must further be explored for: (i) its effects on the geochemical ocean budget; (ii) the rising sea level control on the cliff retreat rates; and (iii) the characteristics and location of sediment deposition on ocean margins.

References

(1) Mahowald NM, Baker AR, Bergametti G, Brooks N, Duce RA, Jickells TD, Kubilay N, Prospero JM, Tegen I (2005). Atmospheric global dust cycle and iron inputs to the ocean. Global Biogeochemical Cycles 19. DOI: 10.1029/2004GB002402

(2) Prémaillon M, Regard V, Dewez TJB, Auda Y (2018). How to explain variations in sea cliff erosion rates? Insights from a literature synthesis. Earth Surface Dynamics Discussions: 1–29. DOI: 10.5194/esurf-2018-12

(3) Milliman J, Farnsworth K (2011). River Discharge to the Coastal Ocean: A Global Synthesis. Cambridge University Press


How to cite: Regard, V., Prémaillon, M., Dewez, T., Carretier, S., Jeandel, C., Godderis, Y., Bonnet, S., Schott, J., Pedoja, K., Martinod, J., Viers, J., and Fabre, S.: Coastal erosion: an overlooked source of sediments to the ocean. Europe as an example, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2877, https://doi.org/10.5194/egusphere-egu22-2877, 2022.

EGU22-3002 | Presentations | GM2.8

Prototype of a deep learning workflow to map dunes in the Kalahari 

Maike Nowatzki, Richard Bailey, and David Thomas

Linear dunes show a wide variety of morphometric patterns; their sizes, spacing, defect densities, and orientations differ not only between but also within dunefields (Thomas 1986; Bullard et al. 1995; Hesse 2011; Hugenholtz et al. 2012). The first step towards characterising dune patterning is to map dunefields accurately and precisely, which is challenging, especially when dunefields are too large to be mapped manually. Thus, (semi-)automatic approaches have been brought forward (Telfer et al. 2015; Shumack et al. 2020; Bryant & Baddock 2021). Here, we present the prototype of a deep learning workflow that allows for the automated mapping of large linear dunefields through semantic segmentation.

The algorithm includes the following components: 1) download of satellite imagery; 2) pre-processing of training and prediction data; 3) training of a neural network; and 4) application of the trained network to classify satellite imagery into dune and non-dune pixels. The workflow is Python-based and uses the deep learning API Keras as well as a variety of spatial analysis libraries such as earthengine and rasterio.
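Component 2) can be illustrated with a minimal patch-tiling sketch; the patch size, band count and array shapes are illustrative, not the package's actual pre-processing code.

```python
# Sketch of pre-processing for semantic segmentation: tile a scene and its
# label mask into aligned fixed-size patches for network training.
import numpy as np

def tile(image, mask, patch=32):
    """Split an (H, W, bands) image and (H, W) mask into aligned patches,
    dropping any partial tiles at the right/bottom edges."""
    h, w = mask.shape
    xs, ys = [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            xs.append(image[i:i + patch, j:j + patch])
            ys.append(mask[i:i + patch, j:j + patch])
    return np.stack(xs), np.stack(ys)

scene = np.random.rand(100, 70, 4)          # e.g. four Sentinel-2 bands
labels = np.random.rand(100, 70) > 0.5      # dune / non-dune mask

X, Y = tile(scene, labels)
print(X.shape, Y.shape)   # (6, 32, 32, 4) (6, 32, 32)
```

Patch stacks in this shape can be fed directly to a Keras segmentation model as training inputs and targets.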

A case study to apply and test the algorithm’s performance was conducted on Sentinel-2 satellite imagery (10 m spatial resolution) of the southwest Kalahari Desert. The resulting predictions are promising, despite the small amount of data the model was trained on.

The presented prototype is work in progress. Further developments will include parameter optimisation, exploring ways to improve the objectivity of the training data, and conducting case studies that apply the algorithm to digital elevation rasters.

How to cite: Nowatzki, M., Bailey, R., and Thomas, D.: Prototype of a deep learning workflow to map dunes in the Kalahari, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3002, https://doi.org/10.5194/egusphere-egu22-3002, 2022.

EGU22-3781 | Presentations | GM2.8

Automatic detection of pit-mound topography from LiDAR based DEMs 

Janusz Godziek and Łukasz Pawlik

Pit-and-mound (treethrow, windthrow) topography is a result of tree uprooting caused by the impact of hurricane-speed wind events. Analyzing its location and morphometric features can improve our knowledge about the influence of winds on forest ecosystem dynamics and on changes in the forest floor microrelief. This is important in terms of hillslope denudation and soil evolution.

The occurrence and evolution of pit-mound topography can be studied with the use of high-resolution elevation data, which can be obtained from LiDAR (Light Detection and Ranging) surveys. The Polish Institute of Geodesy and Cartography carried out such a LiDAR survey in the years 2010–2015, and point cloud data for the entire area of Poland, with a minimum density of 4 points per m², are currently available online.

In the present project, we have analyzed Digital Elevation Models (DEMs) produced from the above-mentioned LiDAR data in order to develop and test a new method for the automatic detection of pit-mound topography. As far as we know, no such method exists at the moment. We generated DEMs with 0.5 m spatial resolution for three study sites in Southern Poland with confirmed occurrences of pit-mound topography. The method was implemented as a script in the R programming language.

The proposed method is based on contour lines. We found that the detection of pit-and-mound topography formed on gentle hillslopes is possible when closed contours are delineated. Detected forms can be classified into "pits" and "mounds" by investigating the positions of the highest and lowest points within the closed contour. For steep surfaces, on the other hand, pit-mound topography can be detected by calculating distances between contours and selecting slope segments with between-contour distances above a certain threshold value. This leads to the identification of gently sloping areas within the study site; with high probability, such areas indicate places where pit-mound topography was formed. To validate our methods, we performed an on-screen assessment of the DEMs for the presence of forms that could be interpreted as pit-mound topography.
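A minimal sketch of the closed-contour classification step, assuming synthetic elevations and skipping the contour extraction itself (the authors' implementation is in R; Python is used here only for illustration):

```python
# A closed contour is labelled a mound if the extreme interior elevation lies
# above the contour level, and a pit if it lies below. Values are synthetic.

def classify_closed_contour(contour_level, interior_elevations):
    """Label a closed contour 'mound' or 'pit' from the most extreme
    elevation enclosed by it."""
    lo, hi = min(interior_elevations), max(interior_elevations)
    # The interior of a closed contour lies (mostly) on one side of the level.
    if hi - contour_level >= contour_level - lo:
        return "mound"
    return "pit"

print(classify_closed_contour(251.0, [251.2, 251.5, 251.8]))  # mound
print(classify_closed_contour(250.0, [249.9, 249.6, 249.4]))  # pit
```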

The study has been supported by the Polish National Science Centre (project no 2019/35/O/ST10/00032).

How to cite: Godziek, J. and Pawlik, Ł.: Automatic detection of pit-mound topography from LiDAR based DEMs, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3781, https://doi.org/10.5194/egusphere-egu22-3781, 2022.

EGU22-4765 | Presentations | GM2.8

A new, multi-scale mapping approach for reconstructing the flow evolution of the Fennoscandian Ice Sheet using high-resolution digital elevation models. 

Frances E. G. Butcher, Anna L. C. Hughes, Jeremy C. Ely, Christopher D. Clark, Emma L. M. Lewington, Benjamin M. Boyes, Alex C. Scoffield, Stephen Howcutt, and Thomas P. F. Dowling

Data-driven reconstructions of palaeo-ice sheets based on their landform records are required for validation and improvement of numerical ice sheet models. In turn, such models can be used to better predict the future responses of the Antarctic and Greenland ice sheets to climate change. We are exploiting the recent expansion in availability and coverage of very-high-resolution (1–2 m) digital elevation models (DEMs) within the domain of the former Fennoscandian Ice Sheet to reconstruct its flow pattern evolution from the glacial landform record.

The Fennoscandian Ice Sheet reached its maximum extent at 21–20 ka. Previous data-driven reconstructions over the whole ice sheet domain (encompassing Fennoscandia, northern continental Europe and western Russia) have necessarily relied upon landform mapping from relatively coarse-resolution (decametre-scale) data, predominantly satellite images and aerial photographs. However, high-resolution (1–2 m/pixel) LiDAR DEMs have recently become available over a large portion of the ice sheet domain above contemporary sea level. These data reveal previously unobserved assemblages of landforms that record past ice sheet flow, including fine-scale cross-cutting and superposition relationships between landforms. Such observations are likely to reveal previously unidentified complexity in the flow evolution of the ice sheet. However, the richness of the data available over such a large area amplifies the labour-intensity challenges of data-driven whole-ice-sheet reconstructions; it is not possible to map every flow-related landform (or even a majority of the landforms) manually in a timely manner. We therefore present a new multi-scale sampling approach for systematic and comprehensive ice-sheet-scale mapping, which aims to overcome the data-richness challenge while maintaining rigour and providing informative data products for model-data comparisons.

We present in-progress mapping products covering Finland, Norway and Sweden produced using our new multi-scale sampling approach. The products include mapping of >200 000 subglacial bedforms and bedform fields, and a summary map of ‘landform linkages’. Landform linkages summarise the detailed landform mapping but do not extrapolate over large distances between observed landforms. Thus, they provide a reduced data product that is useful for regional-scale flow reconstruction and model-data comparisons and remains closely tied to landform observations. The landform linkages will be reduced further into longer interpretative flowlines, which we will then use to generate ‘flowsets’ describing discrete ice flow patterns within the ice sheet. We will use cross-cutting relationships observed in the detailed landform mapping to ascribe a relative chronology to overlapping flowsets where relevant. We will then combine the flowsets into a new reconstruction of the flow pattern evolution of the ice sheet.

How to cite: Butcher, F. E. G., Hughes, A. L. C., Ely, J. C., Clark, C. D., Lewington, E. L. M., Boyes, B. M., Scoffield, A. C., Howcutt, S., and Dowling, T. P. F.: A new, multi-scale mapping approach for reconstructing the flow evolution of the Fennoscandian Ice Sheet using high-resolution digital elevation models., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4765, https://doi.org/10.5194/egusphere-egu22-4765, 2022.

EGU22-5872 | Presentations | GM2.8 | Highlight

Kinematic patterns of tectonic displacements in the Blue Clay outcrops along the eastern border of the Bradanic Trough (Southern Italy) from DTM data processing 

Giuseppe Spilotro, Gioacchino Francesco Andriani, Giuseppe Di Prizio, Katia Decaro, Alessandro Parisi, and Maria Dolores Fidelibus

The Bradanic Trough (Southern Italy) is the Pliocene to present-day southern Apennines foredeep. It is filled by a thick Pliocene to Pleistocene sedimentary succession constituted by hemipelagites (Blue Clay Fm.) in the lower part and coarse-grained deposits (sands and conglomerates) in the upper part, shaped in marine or continental terraced environments.

On the eastern border of the Bradanic Trough along the Murgian Plateau (Apulia, Italy) numerous morphological lineaments are associated with sequential lowering and rotation of the surface, aligned with the carbonate substrate dip direction.

These morphologies have been interpreted so far as erosion products; their association with medium-deep water circulations and surface phenomena, like mud volcanoes, now allows their interpretation as a lumped mass, detached and tilted along shear surfaces.

The surface traces of such shear surfaces may be easily detected thanks to the presence, at some distance, of a closely similar twin track, which overlaps with good agreement.

The numerical analysis of the tracks extracted from accurate DTMs allows us to reconstruct the kinematic patterns of the tectonic displacement (distance of detachment, rotation, angle of the shear plane). This type of analysis might prove very useful in some fields of engineering geology, such as underground works, and for interpreting many hydrogeological phenomena within the study area. Finally, the correct 3D representation of the detached masses helps to identify the true causes of the normal faulting, which is not always linked to tectonics, since tectonic activity is absent in the regions concerned.

How to cite: Spilotro, G., Andriani, G. F., Di Prizio, G., Decaro, K., Parisi, A., and Fidelibus, M. D.: Kinematic patterns of tectonic displacements in the Blue Clay outcrops along the eastern border of the Bradanic Trough (Southern Italy) from DTM data processing, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5872, https://doi.org/10.5194/egusphere-egu22-5872, 2022.

EGU22-5990 | Presentations | GM2.8

Geomorphometry of the deep Gulf of Mexico 

Vincent Lecours

The Gulf of Mexico is characterized by a high geodiversity that influences hydrodynamic patterns and drives biological and human uses of the seafloor. In 2017, the United States Bureau of Ocean Energy Management released a 1.4-billion-pixel bathymetric dataset of the deep northern Gulf of Mexico, with a pixel size of about 12 m. The computational power required to analyze this dataset has limited its use so far. Here, geomorphometry was used to characterize the seafloor of the deep northern Gulf of Mexico at multiple spatial resolutions. Flat areas and slopes cover more than 70% of the studied area, yet thousands of smaller morphological features like peaks and pits were identified. Spatial comparisons confirmed that analyses at different spatial scales capture different features. A composite product combining seafloor classifications at multiple scales helped highlight the dominant seafloor features and the scale at which each is best captured.
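The idea that different scales capture different features can be sketched with a simple multi-scale topographic position index (TPI); the window radii and synthetic surface are illustrative, and this is not the classification actually applied to the Gulf of Mexico dataset.

```python
# A broad, smooth bathymetric feature is invisible to a fine-scale window but
# emerges when the analysis window is enlarged.
import numpy as np

def tpi(dem, radius):
    """Topographic position index: elevation minus the mean of a
    (2*radius+1)^2 neighbourhood; edges are cropped for simplicity."""
    k = 2 * radius + 1
    h, w = dem.shape
    mean = np.zeros((h - k + 1, w - k + 1))
    for di in range(k):
        for dj in range(k):
            mean += dem[di:di + h - k + 1, dj:dj + w - k + 1]
    mean /= k * k
    return dem[radius:h - radius, radius:w - radius] - mean

yy, xx = np.mgrid[0:41, 0:41]
hill = 5.0 * np.exp(-((xx - 20.0) ** 2 + (yy - 20.0) ** 2) / (2.0 * 8.0 ** 2))

fine = tpi(hill, radius=1).max()    # locally smooth: tiny fine-scale TPI
broad = tpi(hill, radius=10).max()  # the feature emerges at the coarser scale
print(fine < broad)  # True
```

Classifying each cell (e.g., peak, pit, flat, slope) from TPI and slope at several radii, then compositing the results, follows the same multi-scale logic described in the abstract.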

How to cite: Lecours, V.: Geomorphometry of the deep Gulf of Mexico, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5990, https://doi.org/10.5194/egusphere-egu22-5990, 2022.

EGU22-6152 | Presentations | GM2.8 | Highlight

Quantifying the morphometry and drainage patterns of composite volcanoes: A comparison of the Japanese and Indonesian volcanic arcs   

Roos M. J. van Wees, Daniel O'Hara, Pablo Grosse, Gabor Kereszturi, Pierre Lahitte, and Matthieu Kervyn

The long-term (ka to Ma) degradation of a volcanic edifice is controlled by both regional (e.g., climate, tectonics) and local factors (e.g., original morphology, lithology), resulting in long-lasting weathering and river incision as well as short-term hazardous events such as flank collapses and lahars. Trends among the morphometry of stratovolcanoes, their drainage networks, denudation, and regional factors were recently characterised for composite volcanoes along the Indonesian arc. Denudation was shown to be negatively correlated with drainage density; the across-arc variations expose a tectonic control on the level of denudation and the volcanoes' irregularity. This study applies the same method to age-constrained volcanoes in Japan to identify trends that are coherent between arcs despite different local and regional factors. We aim to better understand the factors that control erosion rates and patterns, and the evolutionary phases of volcano degradation.

We first compile a dataset of 35 singular, non-complex composite volcanoes with known eruption ages, spatially spread throughout the Japanese island arc system. Using 30 m TanDEM-X digital elevation models, morphological and drainage metrics (e.g., volume, height, slope, irregularity index, Hack's law exponent, and drainage density) are extracted for each volcano using the MORVOLC algorithm adapted in MATLAB as well as the newly developed DrainageVolc algorithm. Correlations between the morphometric parameters and potential controlling factors (e.g., age, climate, lithology, and tectonics) are analysed to determine quantitative relationships of edifice degradation throughout the arc. Finally, we compare relationships and correlation values for the Japanese arc system to those from the Indonesian arc.

The analysis shows that volcano age is positively correlated with irregularity and negatively correlated with height and volume. From the drainage parameters, we find that basins become wider and merge, resulting in lower drainage densities. The variation in erosion rates along the Japanese arc provides evidence for the degree of climatic control on the volcano degradation. The between-arc comparison shows which trends are susceptible to arc-scale variations and highlights consistent trends that have the potential to be extrapolated to other volcanic arcs and be used as a relative age determination tool for composite volcanoes.

How to cite: van Wees, R. M. J., O'Hara, D., Grosse, P., Kereszturi, G., Lahitte, P., and Kervyn, M.: Quantifying the morphometry and drainage patterns of composite volcanoes: A comparison of the Japanese and Indonesian volcanic arcs  , EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6152, https://doi.org/10.5194/egusphere-egu22-6152, 2022.

EGU22-6177 | Presentations | GM2.8

Identification, GIS-based mapping and morphometric analysis of river terraces from airborne LiDAR data in the Augšdaugava spillway valley, South-eastern Latvia 

J. Soms and V. Vorslavs

The Augšdaugava spillway valley in SE Latvia has a system of river terraces formed by both glacio-fluvial and fluvial processes. The flight of terraces forms a staircase-like relief in the riverine landscape and records valley evolution during the transition from glacial to post-glacial conditions in this region. Terraces are thus substantial 'archives' of paleoenvironmental data, and their geomorphometry could provide key information for untangling the geomorphological history of the spillway valley; precise identification and mapping of terraces is therefore essential. However, these landforms, particularly the upper terraces, are commonly poorly preserved. This is a result of the interplay of many geological processes – channel incision, lateral erosion during meandering of the river Daugava, mass wasting, etc. – leaving discontinuous remnants of terraces along the present-day long profile of the river. Previously, mapping of these features was performed via extensive field surveys and, to some extent, by interpretation of aerial images or topographic maps, because tree cover hinders the identification of terraces by conventional geomorphological techniques. Thus, due to the poor preservation of fluvial landforms and the abundant vegetation cover, the previously mapped terrace surfaces and inferred levels may be questionable.

The high-resolution LiDAR data now available in Latvia and the application of modern GIS-based techniques offer an opportunity to resolve these problems. The main goal of the study was therefore to apply a methodology based on a LiDAR-derived DEM, combining different semi-automated GIS analysis tools for the identification, mapping and morphometric analysis of fluvial terraces in the valley. In this study, LiDAR data coverage (courtesy of the Latvian Geospatial Information Agency) was used to generate a DEM. The LiDAR coverage consists of 317 data folders in *.LAS format, each of 1 km² extent. A DEM with 0.5 x 0.5 m pixel resolution and <15 cm vertical accuracy was created with the ArcGIS Pro tool 'LAS Dataset to Raster' following the standard procedure of IDW interpolation. After construction of the DEM, the TerEx toolbox integrated into the ArcGIS environment was used for the extraction and delineation of terrace surfaces. After completion of the GIS work, ground-truthing of the mapped fluvial terraces was performed during a field geomorphological reconnaissance.
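For illustration, the IDW gridding step can be sketched in miniature; this brute-force toy mirrors what the ArcGIS tool does at scale, with an illustrative power parameter, and is not the tool's actual implementation.

```python
# Inverse-distance-weighted (IDW) interpolation: each DEM cell receives a
# distance-weighted average of nearby point elevations.
from math import hypot

def idw(points, x, y, power=2.0):
    """IDW elevation at (x, y) from (px, py, z) tuples."""
    num = den = 0.0
    for px, py, z in points:
        d = hypot(x - px, y - py)
        if d == 0.0:
            return z                  # exact hit on a LiDAR return
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

pts = [(0.0, 0.0, 100.0), (1.0, 0.0, 102.0), (0.0, 1.0, 104.0)]
print(idw(pts, 0.0, 0.0))            # 100.0: coincident point returned exactly
print(round(idw(pts, 0.5, 0.5), 2))  # 102.0: equidistant, so a simple mean
```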

DEM analysis allowed us to identify a terrace sequence in the Augšdaugava spillway valley consisting of eight terrace levels, T1 to T8. With the applied methodology, we were able to delineate surfaces of river terraces in those parts of the valley where previous research had interpreted terraces incorrectly or not identified them at all. However, only terraces T1 and T2 can be unambiguously identified by GIS-based extraction. Upper terraces with edges smoothened by mass wasting and surfaces dissected by gullies are not easily recognizable. Hence, the presence of minor landforms which increase the topographic roughness of the surface directly influences the quality of the extracted data, thus necessitating an extensive amount of manual editing.

How to cite: Soms, J. and Vorslavs, V.: Identification, GIS-based mapping and morphometric analysis of river terraces from airborne LiDAR data in the Augšdaugava spillway valley, South-eastern Latvia, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6177, https://doi.org/10.5194/egusphere-egu22-6177, 2022.

EGU22-6681 | Presentations | GM2.8

Automated tools for identifying bankfull river channel extents: developing and comparing objective and machine-learning methods 

Kathryn Russell, Jonathan Garber, Karen Thompson, Jasper Kunapo, Matthew Burns, and Geordie Zhang

Bankfull channel dimensions are of fundamental importance in fluvial geomorphology, to describe the geomorphic character of a river, as inputs to models which explain variations in morphology through time and space, and as initial processing steps in more detailed morphometric techniques. With ever-increasing availability of high-resolution elevation data (e.g. LiDAR), manual delineation of channel extents is a bottleneck which limits the geomorphic insights that can be gained from that data.

We developed and tested two automated channel delineation methods that define bankfull according to different criteria and thus reflect different conceptualisations of bankfull extent: (1) a cross-sectional method (termed HydXS) that identifies the elevation which maximises hydraulic depth (cross-section area/wetted width); and (2) a neural network image segmentation model trained on images derived from a LiDAR digital elevation model.
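The core of the HydXS criterion can be sketched for a single synthetic cross-section as follows; the profile, the candidate-stage sweep and the cell geometry are illustrative simplifications, not the published implementation.

```python
# Sweep candidate water-surface elevations across one cross-section and keep
# the stage that maximises hydraulic depth (flow area / wetted top width).

def hydraulic_depth(stations, elevations, stage):
    """Flow area / wetted width at a given stage, piecewise-constant cells."""
    area = width = 0.0
    for i in range(len(stations) - 1):
        dx = stations[i + 1] - stations[i]
        zmid = 0.5 * (elevations[i] + elevations[i + 1])
        if zmid < stage:
            area += (stage - zmid) * dx
            width += dx
    return area / width if width else 0.0

def bankfull_stage(stations, elevations, n_steps=200):
    zmin, zmax = min(elevations), max(elevations)
    candidates = [zmin + (zmax - zmin) * k / n_steps for k in range(1, n_steps + 1)]
    return max(candidates, key=lambda s: hydraulic_depth(stations, elevations, s))

# Trapezoidal channel (bank tops at z = 2) inset in a wide floodplain (z = 2.2).
x = [0.0, 8.0, 10.0, 12.0, 14.0, 22.0]
z = [2.2, 2.0, 0.0, 0.0, 2.0, 2.2]
stage = bankfull_stage(x, z)
print(round(stage, 2))  # close to the 2 m bank tops, not the floodplain
```

Once the water surface spills onto the wide floodplain, wetted width grows much faster than area, so hydraulic depth collapses; the maximiser therefore sits at the bank tops, which is the bankfull intuition the method encodes.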

HydXS outperformed the neural network method overall, but the two methods were comparable in larger streams (> 20 m bankfull width; Dice coefficient ~0.85). Prediction accuracy of HydXS was generally high (overall precision 89%; recall 81%), performing well even in small streams (bankfull width ~ 10 m). HydXS performed worst in incised and recovering stream sections (precision 93%; recall 64%) where the choice between macro-channel and inset channel was somewhat arbitrary (both for the algorithm and manual delineation). The neural network outperformed HydXS where an inset channel was present. The neural network method performed worst in small streams and where other features (e.g. road embankments, small ditches) were misclassified as channels. Neural network performance was improved markedly by trimming the area of interest to a 100-m wide buffer along the stream, eliminating many areas prone to misclassification.

The two methods provide different ways to effectively leverage high-resolution LiDAR datasets to gain information about channel morphology. These methods are a significant step forward as they can delineate bankfull elevation, as well as bankfull width, and operate using morphology alone. HydXS is an objective method that doesn’t require training, can be run on consumer-level hardware, and can perform well in small streams, but requires manual work to develop the necessary spatial framework of an accurate channel centerline. The neural network model is a promising method to delineate larger channels (>20 m wide) without requiring detailed centerline or cross-section data, given adequate training data for the stream type of interest (i.e. expert-delineated bankfull channel extents). We envisage that further improvement of the neural network method is possible by scaling the input image extents to catchment area, and training on a larger dataset from multiple regions to increase generalizability. 

How to cite: Russell, K., Garber, J., Thompson, K., Kunapo, J., Burns, M., and Zhang, G.: Automated tools for identifying bankfull river channel extents: developing and comparing objective and machine-learning methods, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6681, https://doi.org/10.5194/egusphere-egu22-6681, 2022.

Despite a long record of applications and a well-known theoretical framework, geostatistics-based image/surface texture tools have still not gained wide diffusion in geomorphometric analysis, even for the evaluation of surface roughness. Many geomorphometric studies dealing with various aspects of surface roughness use well-known approaches based on the vector dispersion of normals to the surface, or the popular Topographic Ruggedness Index. In many comparative studies on roughness metrics, geostatistical approaches are cited but not tested; in others, they are tested using algorithms not adapted to the analysis of morphometric data. In remote sensing, geostatistical approaches are more popular, even if there is no consensus on the most suitable metrics for computing image texture indices. In the metrology of manufactured surfaces, equipped with various industrial standards for surface texture measurement, approaches based on autocorrelation are widely adopted. However, “natural” surfaces and the related morphogenetic factors are much more complex than manufactured surfaces, and ad-hoc concepts and algorithms should be devised. This presentation focuses mainly on topographic surface analysis, but the considerations and results also apply to image analysis. It aims to clarify some aspects of the geostatistical methodologies, highlighting their effectiveness and flexibility for multiscale and directional evaluation of surface texture, and the connections with other methodologies and concepts of spatial data analysis. Finally, a simplified algorithm for computing surface roughness indices is introduced, which does not require preliminary detrending of the input DEM.
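As one concrete example of the kind of index discussed here, the sketch below computes a simple short-range roughness measure from lag-1 elevation increments, in the spirit of variogram/MAD-style texture analysis (cf. Trevisani and Rocca, 2015). It is an illustration only, not the simplified algorithm introduced in the presentation.

```python
import numpy as np

def short_range_roughness(dem):
    """Median absolute elevation difference to the 8 neighbours (lag ~1 cell).

    An illustrative short-range roughness index in the spirit of the
    geostatistical (variogram/MAD-style) texture measures discussed above --
    NOT the presentation's algorithm. Because it works on increments between
    neighbouring cells, no prior detrending of the DEM is needed.
    """
    h, w = dem.shape
    pad = np.pad(dem, 1, mode='edge')
    diffs = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            diffs.append(np.abs(shifted - dem))
    return np.median(np.stack(diffs), axis=0)

flat = np.ones((5, 5))                                    # roughness 0 everywhere
bumpy = np.arange(25, dtype=float).reshape(5, 5) % 2 * 3  # 3 m checkerboard bumps
r = short_range_roughness(bumpy)
```

Working on increments (differences at a given lag) rather than raw elevations is what makes such indices insensitive to a local linear trend, which is why no preliminary detrending is required.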


References

ATKINSON, P.M. and LEWIS, P., 2000. Geostatistical classification for remote sensing: An introduction. Computers and Geosciences, 26(4), pp. 361-371.

BALAGUER, A., RUIZ, L.A., HERMOSILLA, T. and RECIO, J.A., 2010. Definition of a comprehensive set of texture semivariogram features and their evaluation for object-oriented image classification. Computers and Geosciences, 36(2), pp. 231-240.

GUTH, P.L., 2001. Quantifying terrain fabric in digital elevation models. GSA Reviews in Engineering Geology, 14, pp. 13-25.

HERZFELD, U.C. and HIGGINSON, C.A., 1996. Automated geostatistical seafloor classification - Principles, parameters, feature vectors, and discrimination criteria. Computers and Geosciences, 22(1), pp. 35-41.

TREVISANI, S., CAVALLI, M. and MARCHI, L., 2009. Variogram maps from LiDAR data as fingerprints of surface morphology on scree slopes. Natural Hazards and Earth System Science, 9(1), pp. 129-133.

TREVISANI, S., CAVALLI, M. and MARCHI, L., 2012. Surface texture analysis of a high-resolution DTM: Interpreting an alpine basin. Geomorphology, 161-162, pp. 26-39.

TREVISANI, S. and ROCCA, M., 2015. MAD: Robust image texture analysis for applications in high resolution geomorphometry. Computers and Geosciences, 81, pp. 78-92.

TREVISANI, S. and CAVALLI, M., 2016. Topography-based flow-directional roughness: Potential and challenges. Earth Surface Dynamics, 4(2), pp. 343-358.


How to cite: Trevisani, S.: Returning to geostatistical-based analysis of image/surface texture: from generalization to a basic one-click short-range surface roughness algorithm, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6924, https://doi.org/10.5194/egusphere-egu22-6924, 2022.

EGU22-7860 | Presentations | GM2.8

The Application of Relief Models for Environmental Solutions: Review 

Linda Grinberga, Armands Celms, Krisjanis Sietins, Toms Lidumnieks, Miks Brinkmanis-Brimanis, and Jolanta Luksa

With the development of remote sensing technologies, the application of different geospatial models in research has become increasingly important. Terrain relief is the difference in elevation between the high and low points of a land surface, that is, the change in the height of the ground over the area. Terrain relative relief (or elevation) is the relative difference in elevation between a morphological feature and those surrounding it (e.g. the height difference between a peak and surrounding peaks, a depression and surrounding depressions, etc.). Together with terrain morphology and other terrain attributes, it is useful for describing how the terrain affects intertidal and subtidal processes.
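The relative-relief attribute defined above is straightforward to compute as the elevation range within a moving window. A minimal sketch (the window size and edge handling are arbitrary choices here, not values from the review):

```python
import numpy as np

def relative_relief(dem, radius=1):
    """Relief (max - min elevation) within a square moving window of
    2*radius + 1 cells -- a minimal sketch of the relative-relief attribute
    described above; window size and edge clamping are arbitrary choices.
    """
    h, w = dem.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            win = dem[max(0, i - radius):i + radius + 1,
                      max(0, j - radius):j + radius + 1]
            out[i, j] = win.max() - win.min()
    return out

peak = np.zeros((3, 3))
peak[1, 1] = 10.0                      # isolated 10 m mound on a flat plain
rr = relative_relief(peak)
flat_rr = relative_relief(np.full((3, 3), 5.0))
```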

Appropriate decision-making tools are required for urban and rural planning, design and management. The usage of DEM (Digital Elevation Model), DSM (Digital Surface Model) and DTM (Digital Terrain Model) data helps researchers and designers to analyse issues connected with drainage, geology, crustal movements, sound and radio-wave distribution, wind effects, exposure to sun, etc. Analysis of future scenarios based on geospatial models plays an essential role in water management and various environmental topics. This review focuses on environmental issues in the context of water quality and hydrology.

How to cite: Grinberga, L., Celms, A., Sietins, K., Lidumnieks, T., Brinkmanis-Brimanis, M., and Luksa, J.: The Application of Relief Models for Environmental Solutions: Review, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7860, https://doi.org/10.5194/egusphere-egu22-7860, 2022.

EGU22-8728 | Presentations | GM2.8

Mapping of natural and artificial channel networks in forested landscapes using LiDAR data to guide effective ecosystem management 

Siddhartho Shekhar Paul, Eliza M. Hasselquist, William Lidberg, and Anneli M. Ågren

High-resolution Light Detection and Ranging (LiDAR) data provide unique opportunities for landscape-scale mapping of hydrological features. LiDAR-derived digital elevation models are particularly valuable for identifying channel networks in densely forested landscapes, where satellite imagery-based mapping approaches are challenged by forest canopies. Artificial drainage practices have caused widespread alteration of the northern landscapes of Europe and North America, which has likely had significant impacts on hydrological connectivity and ecosystem functioning. However, these artificial channels are rarely considered in ecosystem management and are poorly represented in existing geomorphological datasets. In this study, we conducted a landscape-scale analysis across 11 selected study regions in Sweden using LiDAR data for the virtual reconstruction of artificial drainage ditches to understand the extent of their ecological impacts.

We utilized a 0.5 m resolution digital elevation model for mapping natural channel heads and artificial ditches across the study regions. We also implemented a unique approach by back-filling ditches in the current digital elevation model to recreate the prehistoric landscape. This enabled us to map and model the channel networks of prehistoric (natural) and current (drained) landscapes. We found that 58% of the prehistoric natural channels had been converted to ditches. Moreover, the average channel density increased from 1.33 km km⁻² in the prehistoric landscape to 4.66 km km⁻² in the current landscape, indicating substantial ditching activities in the study regions.

Our study highlights the need for accurate delineation of natural and artificial channel networks in northern landscapes for effective ecosystem restoration and management. We presented an innovative technique for comparing the channel networks between the prehistoric natural landscape and current modified landscape by integrating advanced LiDAR data, extensive manual digitization, and modeling; a highly suitable combination for channel network mapping in dense forest landscapes. The developed methodology can be implemented in any landscape for understanding the extent of human modification of natural channel networks to guide future environmental management activities and policy formulation.

How to cite: Paul, S. S., Hasselquist, E. M., Lidberg, W., and Ågren, A. M.: Mapping of natural and artificial channel networks in forested landscapes using LiDAR data to guide effective ecosystem management, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8728, https://doi.org/10.5194/egusphere-egu22-8728, 2022.

EGU22-9650 | Presentations | GM2.8

Geodiversity as a key component for the evaluation of urban biodiversity 

Martina Burnelli, Massimiliano Alvioli, Laura Melelli, and Alessia Pica

Ecodiversity stems from the interaction between the biosphere and the geosphere, and it is one of the necessary conditions for achieving a sustainable planet. Thus, the relationship between geodiversity and biodiversity should be clearly defined. The relationship between climate and topography in rough mountain areas at low latitudes, as a constraint on the high values of biodiversity, has already been established. As a consequence, topography is the first and most important input parameter for investigating the connections between abiotic and biotic variety. Spatial analysis in a GIS framework is the key approach to better understand the role of topographic and hydrographic variables in evaluating geodiversity (geomorphodiversity).

In this paper we focus on urban areas, where 60% of the world's population is expected to live by 2030. A science of cities is the future challenge for Earth Sciences: urban geomorphology could be the key to a complete overview of the abiotic and biotic parameters in sustainable cities. To achieve this aim, the conservation of urban biodiversity is fundamental. Analysing the correlation between geodiversity and biodiversity may provide a guideline for a science of cities and for designing and managing sustainable urban areas.

These ideas, if transposed to an urban context, should go beyond morphometric analysis of topography and take into account anthropogenic features and natural landforms modified by humans over time. To this end, geomorphological mapping is fundamental to calibrate the quantitative models in a truly multidisciplinary approach to a science of cities and urban biodiversity. We consider our contribution a new model for the analysis of geodiversity in urban areas.

How to cite: Burnelli, M., Alvioli, M., Melelli, L., and Pica, A.: Geodiversity as a key component for the evaluation of urban biodiversity, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9650, https://doi.org/10.5194/egusphere-egu22-9650, 2022.

EGU22-10469 | Presentations | GM2.8

Automatic detection of rock outcrops on vegetated and moderately cultivated areas 

Réka Pogácsás and Gáspár Albert

State-of-the-art applications in various Earth science domains show that classification methods are playing an increasingly important role in mapping due to their improving accuracy. However, in the field of geological mapping, the exclusive use of morphometric and spectral indices in classification models is still often considered a subsidiary mapping tool. This is particularly true in areas where the surface is covered by vegetation and the soil layer is relatively thick, since in such places geological structures can only be observed at first hand at rock outcrops. The aim of our research is to investigate the automatic mapping of rock outcrops in the Dorog Basin in Hungary, where outdated geological maps are currently being updated. In this research, we applied random forest classification combined with a wide range of input data, including satellite imagery and ecosystem information.

The Dorog Basin, located in northern central Hungary, has a medium-density settlement network, with built-up and cultivated areas alternating with wooded or scrub-covered terrain with rugged topography. The region is tectonically fragmented, and former fluvial erosion is of great importance. In several places the Mesozoic carbonates, Paleogene limestones or limnic coal sequences crop out from beneath the Quaternary sediments, resulting in a diverse but well identifiable surface. For the 86.86 km2 study area, the model input included 14 morphometric raster layers derived from SRTM-1, six raster layers with mineral indices derived from Sentinel-2, and one ecosystem layer [1], all set to a uniform ~25 m resolution. To test the performance of random forest classification in modelling pre-Quaternary formations, we applied two different approaches. In the first, we used conventional training areas to model the pre-Quaternary outcrops and the physical characteristics of the surface formations. In the second, we modelled the same targets using randomly selected zones across the study area, with around 6000-10000 random training polygons; these were circles of about 1-2 pixels in size around points. The training areas were derived from the former geological map of the Dorog Basin [2]. The importance of the input parameters was also recorded for further use. A six-fold cross-validation of the selected training areas showed that the two methods were equally accurate, but the automatic processing of randomly selected training areas was faster.

Based on the modelling results, the pre-Quaternary rock outcrops of the area can be determined with at least 80% confidence using random forest classification. These results will be used in future field mapping, which will also provide a field validation of the method.

For G.A., financial support was provided by the NRDI Fund of Hungary, Thematic Excellence Programme no. TKP2020-NKA-06 (National Challenges Subprogramme) funding scheme.

[1] Ecosystem Map of Hungary. DOI: 10.34811/osz.alapterkep

[2] Gidai, L., Nagy, G., & Sipass, S. (1981). Geological map of the Dorog Basin 1:25 000. [in Hungarian] Geological Institute of Hungary, Budapest.

How to cite: Pogácsás, R. and Albert, G.: Automatic detection of rock outcrops on vegetated and moderately cultivated areas, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10469, https://doi.org/10.5194/egusphere-egu22-10469, 2022.

EGU22-10675 | Presentations | GM2.8

Response of a small mountain river to a sediment pulse tracked using sub-canopy UAV surveys 

Conor McDowell, Carina Helm, David A. Reid, and Marwan Hassan

Remotely piloted aircraft (UAVs) and Structure-from-Motion (SfM) photogrammetry have become a widely used approach for producing high-resolution topographic measurements of river systems. This approach has the benefit of capturing data over large spatial scales while requiring little time in the field. In small, forested rivers, the dense canopy has hindered the use of remote sensing techniques, limiting topographic data collection to more time-consuming and lower-resolution methods. This complicates monitoring the response of these systems to individual floods, as in many situations there is not enough time to complete such surveys between events.

In this study, we pilot the use of sub-canopy UAV surveys (flown at 1-3 m altitude) to monitor the response of a small mountain stream (1-3 m wide) in British Columbia to a sediment pulse generated by the removal of an upstream culvert. Using eleven surveys flown over a three-year period, we track the downstream propagation of the pulse and the subsequent responses in bed topography and roughness along the 240 m reach. We observe a “build-and-carve” response of the channel, where some channel segments aggrade during the first floods after pulse generation, whereas others undergo little morphologic activity. In subsequent floods, these aggradational segments rework through the carving of well-defined channels that release this aggraded sediment downstream. These “build-and-carve” segments serve as temporary storage reservoirs that caused the pulse to fragment as it progressed downstream. The locations of these storage reservoirs were set by the initial channel morphology and the movement of in-stream wood and debris. This study highlights the importance of temporary sediment storage reservoirs for fluvial morphodynamics and provides some insights and suggestions for the future monitoring of forested river systems using sub-canopy drone surveys.

How to cite: McDowell, C., Helm, C., Reid, D. A., and Hassan, M.: Response of a small mountain river to a sediment pulse tracked using sub-canopy UAV surveys, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10675, https://doi.org/10.5194/egusphere-egu22-10675, 2022.

EGU22-11010 | Presentations | GM2.8

InSAR phase unwrapping using Graph neural networks 

Anshita Srivastava, Ashutosh Tiwari, Avadh Bihari Narayan, and Onkar Dikshit

Advancements in the processing strategies of time-series interferometric synthetic aperture radar (InSAR) have resulted in improved deformation monitoring and DEM generation. Both applications use phase unwrapping, which involves finding and adding the unknown correct number of phase cycles to the wrapped phase. It is the inverse process of recovering the absolute phase from the wrapped phase, with the objective of removing the 2π-multiple ambiguity. Ideally, this could be achieved by adding or subtracting 2π at each pixel depending on the phase difference between neighbouring pixels. The problem appears effortless but brings challenges due to noise and inconsistencies. The conventional methods require improvements in accurately estimating the unknown number of phase cycles and in dealing with phase jumps. Recently, deep learning methods have been used extensively in remote sensing to solve complex image processing problems such as object detection and localization, image classification, etc. Since not all pixels in a stack of interferograms are used in unwrapping, and the pixels that are used are scattered irregularly, modelling the unwrapping problem as an image classification problem is infeasible. In this work, we deploy Graph Neural Networks (GNNs), a class of deep learning methods designed to infer information from input graphs, to solve the unwrapping problem. Phase unwrapping can be posed as a node classification problem using a GNN, where each pixel is treated as a node. The method aims to exploit the capability of GNNs to correctly predict the phase count of each pixel. The proposed work aims to improve the computational efficiency and accuracy of the unwrapping process, resulting in reliable estimation of displacement.
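The 2π ambiguity described above is easiest to see in one dimension, where unwrapping reduces to adding the right multiple of 2π at each step. The sketch below shows this classic 1-D case; the 2-D InSAR problem the GNN addresses is far harder because usable pixels are sparse, irregularly scattered, and noisy.

```python
import numpy as np

def unwrap_1d(wrapped):
    """Classic 1-D phase unwrapping: wherever the jump between successive
    samples leaves (-pi, pi], add/subtract the appropriate multiple of 2*pi.
    This illustrates the 2*pi-multiple ambiguity described above.
    """
    out = np.asarray(wrapped, dtype=float).copy()
    for i in range(1, len(out)):
        jump = out[i] - out[i - 1]
        k = np.round(jump / (2 * np.pi))   # whole cycles lost to wrapping
        out[i:] -= k * 2 * np.pi
    return out

true_phase = np.linspace(0.0, 6 * np.pi, 50)       # smooth ramp spanning 3 cycles
wrapped = np.angle(np.exp(1j * true_phase))        # wrapped into (-pi, pi]
recovered = unwrap_1d(wrapped)
```

This recovers the absolute phase exactly only when the true phase difference between neighbours stays below π; noise and larger gradients break that assumption, which is what makes 2-D unwrapping a genuine inference problem.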

How to cite: Srivastava, A., Tiwari, A., Narayan, A. B., and Dikshit, O.: InSAR phase unwrapping using Graph neural networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11010, https://doi.org/10.5194/egusphere-egu22-11010, 2022.

Understanding the mechanism of fault rupture is important to minimize earthquake damage and to estimate the impacts of future earthquakes. In this study, we observed the surface displacements caused by the Hovsgol earthquake (Mw 6.7) of January 2021 using three Differential Interferometric SAR (DInSAR) pairs of Sentinel-1B at the descending node and ALOS-2 at the ascending and descending nodes, and then estimated the source parameters of the earthquake by inversion of the observed displacement fields. The maximum surface displacement in the radar look direction was 21 cm at the Sentinel-1 descending node, and 32 cm and 26 cm at the ALOS-2 ascending and descending nodes, respectively. All differential interferograms showed three fringe patterns near the epicenter, which suggests that there were three rupture planes with different slips. We performed inversion modelling of the DInSAR-observed surface displacements assuming three rupture planes with different slip magnitudes and directions. The normalized root mean square error (NRMSE) between the modelled and observed displacements was smaller than 4% for all DInSAR observations, and the spatial distribution of the modelled displacements matched the observed one. The source parameters of the fault estimated by the inversion were closely consistent with the measurements by the United States Geological Survey and the Global Centroid Moment Tensor. The inversion results demonstrate that the assumption of our inversion modelling (three rupture planes) is reasonable.
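As an illustration of the misfit measure used above, an NRMSE can be computed as below. The abstract does not state the authors' normalisation, so normalising by the observed dynamic range is an assumption (one common convention).

```python
import numpy as np

def nrmse(observed, modelled):
    """Root-mean-square error between observed and modelled displacements,
    normalised by the observed dynamic range. The abstract does not state
    the authors' normalisation, so range-normalisation is an assumption.
    """
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    rmse = np.sqrt(np.mean((modelled - observed) ** 2))
    return rmse / (observed.max() - observed.min())

# Example: a uniform 1 cm bias on a 20 cm displacement range gives NRMSE = 5%.
obs = np.array([0.0, 10.0, 20.0])
err = nrmse(obs, obs + 1.0)
```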

How to cite: Kim, T. and Han, H.: Source parameters of the 2021 Hovsgol earthquake (Mw 6.7) in Mongolia estimated by using Sentinel-1 and ALOS-2 DInSAR, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11152, https://doi.org/10.5194/egusphere-egu22-11152, 2022.


1.    INTRODUCTION

Monitoring land use and land cover change (LULCC) is one of the best methods to understand the interactive changes of agriculture, climate change, and ecological dynamics. In eastern Asia, Taiwan is characterized by high population density, rich biodiversity, and complex terrain. However, recent climate change has impacted the people and ecosystems of Taiwan. Therefore, we applied landscape metrics and a deep learning U-net semantic segmentation model to enhance the efficiency of remote sensing image-based LULCC monitoring, taking as a case study the suburban areas of central Taiwan, a region that plays an important economic role in Taiwan and is occupied by intensive agricultural activities.

2.    METHOD

This study focuses on six townships in Nantou County in central Taiwan, where the major agricultural products are rice, tea, and fruit. We obtained Sentinel-2 images for four dates in February of 2018 and 2021 and classified the landscape into five classes: agricultural, forest, built-up, free water bodies, and bare land. The spectral band information (blue, green, red, NIR), the normalized difference vegetation index (NDVI), and the soil-adjusted vegetation index (SAVI) were used to establish the deep learning U-net semantic segmentation model. The accuracy and the loss of the trained model are 0.89 and 0.02, respectively. The ground truth data were cross-checked against the official land-use classification information and high-spatial-resolution imagery in Google Earth Pro. Finally, we analysed the classification results to detail the study area's change trajectory and explore the complex spatiotemporal landscape patterns.
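The two vegetation indices used as model inputs follow standard definitions (Rouse et al., 1973; Huete et al., 1992), sketched below. The soil-brightness factor L = 0.5 is the conventional default, assumed here since the abstract does not state the value used.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index (Rouse et al., 1973)."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index (Huete et al.). L is the soil-brightness
    correction factor; 0.5 is the conventional default, assumed here (the
    abstract does not state the value used)."""
    return (1.0 + L) * (nir - red) / (nir + red + L)

# Example reflectances for a vegetated pixel:
v_ndvi = ndvi(0.5, 0.1)   # = 0.4 / 0.6
v_savi = savi(0.5, 0.1)   # = 1.5 * 0.4 / 1.1
```

In practice these would be computed per pixel from the Sentinel-2 red and NIR bands and stacked with the raw bands as input channels to the U-net.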

3.    RESULTS AND DISCUSSIONS

According to the results, the forest area on the eastern side accounts for more than 70% of the study area. The built-up and agricultural areas showed an upward trend during the research period (16% and 5%, respectively); in addition, except for free water bodies, whose number of patches decreased, all other categories trended upward, with the built-up and agricultural areas increasing the most. Shannon's Evenness Index reflects that the patches are evenly distributed in space, and the decrease of the area-weighted mean fractal dimension index reflects possible influences of anthropogenic activities; together, the results indicate an increasing level of fragmentation. In conclusion, satellite imagery combined with a deep learning U-net semantic segmentation model can sufficiently discern detailed LULCC. Furthermore, combined with landscape metric information, the interactions between humans and the environment can be better understood quantitatively.
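Shannon's Evenness Index mentioned above is the Shannon diversity of the class-area proportions normalised by its maximum value; a minimal sketch:

```python
import math

def shannon_evenness(class_areas):
    """Shannon's Evenness Index (SHEI): Shannon diversity of the class-area
    proportions divided by its maximum, ln(m); 1 means perfectly even."""
    total = sum(class_areas)
    props = [a / total for a in class_areas if a > 0]
    h = -sum(p * math.log(p) for p in props)
    return h / math.log(len(props))

even = shannon_evenness([1, 1, 1, 1, 1])    # five equal classes -> 1.0
skewed = shannon_evenness([97, 1, 1, 1])    # one dominant class -> near 0
```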

References

Huete, A. R., Hua, G., Qi, J., Chehbouni, A., & Van Leeuwen, W. J. D., 1992: Normalization of multidirectional red and NIR reflectances with the SAVI. Remote Sensing of Environment, 41(2-3), 143-154.

Ronneberger, O., Fischer, P., & Brox, T., 2015: U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention (pp. 234-241). Springer, Cham.

Rouse, J. W., Haas, R. H., Schell, J. A., & Deering, D. W., 1973: Monitoring vegetation systems in the Great Plains with ERTS. ERTS Third Symposium, NASA SP-351, pp. 309-317.

How to cite: Zhuang, Z.-H. and Tsai, H. P.: Application of Deep Learning Model to LULCC Monitoring using Remote Sensing Images-A case study in suburban areas of central Taiwan, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11764, https://doi.org/10.5194/egusphere-egu22-11764, 2022.

EGU22-12041 | Presentations | GM2.8

Newly-Born Sand Dunes of Lake Urmia: Assessing Migration Rate and Morphodynamic Changes Using Remote Sensing Techniques and Field Studies 

Hesam Ahmady-Birgani, Parisa Ravan, Zhengyi Yao, and Gabriela Mihaela Afrasinei

To enhance the understanding of aeolian landforms and their processes, assessing the origin, migration and evolution of newly-born sand dunes is vital. In this regard, Lake Urmia, in NW Iran, was considered a representative case study, given that it has lost approximately two-thirds of its water volume in the past two decades and, consequently, newly-born sand ridges and sand dunes have formed on its western shores. The emerging sand dunes are located close to villages and adjacent to agricultural lands and farmland, an international transit road, and an industrial zone encompassing the whole area. The present study aims to assess the sand dunes’ origin and their migration, in both speed and direction, over the past decade.

To address these questions, remote sensing techniques and in-field studies were coupled. Wind data from the closest meteorological station were used to calculate the wind rose, drift potential (DP), resultant drift potential (RDP), and resultant drift direction (RDD) across the region. Change detection techniques using high-resolution satellite images were chosen to detect the migration rate and morphodynamic changes of the Lake Urmia sand dunes. To classify the geomorphological features and land uses in the region, a hybrid supervised classification approach including a customised decision tree classifier was used to distinguish sand dune units from other signatures. Using the minimum bounding geometry method, feature classes were created representing the length, width, and orientation of the sand dunes, retrieved after the image classification process. Fieldwork surveying was also carried out on sixteen sand dunes in different periods to measure the morphological and evolutionary changes.
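Drift potential is conventionally computed with a Fryberger-and-Dean-style weighting of above-threshold winds, summed over the wind-frequency record. The sketch below illustrates that convention; the threshold value and unit scaling are assumptions, not values from the abstract.

```python
import math

def drift_potential(records, vt=6.0):
    """Fryberger-and-Dean-style drift potential from wind records.

    records : list of (speed_knots, direction_deg_wind_is_FROM, frequency_percent)
    vt      : threshold velocity in knots; 6 kt is a common default and is an
              assumption here (the abstract does not state the value used).
    Returns total DP, resultant drift potential (RDP) and resultant drift
    direction (RDD, degrees, the direction sand moves TOWARD).
    """
    dp_total, x, y = 0.0, 0.0, 0.0
    for v, d_from, f in records:
        if v <= vt:
            continue                       # only above-threshold winds move sand
        dp = v * v * (v - vt) * f / 100.0  # Fryberger weighting, vector units
        dp_total += dp
        d_to = math.radians((d_from + 180.0) % 360.0)  # transport direction
        x += dp * math.sin(d_to)
        y += dp * math.cos(d_to)
    rdp = math.hypot(x, y)
    rdd = math.degrees(math.atan2(x, y)) % 360.0
    return dp_total, rdp, rdd

# A single westerly wind class (10 kt, blowing from 270 deg, 100% of the time):
dp, rdp, rdd = drift_potential([(10.0, 270.0, 100.0)])
```

The RDP/DP ratio derived from these quantities is what distinguishes unidirectional from multidirectional wind regimes.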

The wind results show that between 2006-2009 and 2015-2020 there are significant gaps between the percentage of wind speeds above the threshold velocity (V>Vt%) and DP, suggestive of weaker winds in those periods. Between 2009 and 2015, however, the V>Vt% and DP values correspond closely. This indicates that the most erosive and shifting winds occurred between 2009 and 2015, with the weakest wind power at the tails of the record. Moreover, the annual variability of DPt correlates well with Lake Urmia water-level changes, but there is no correlation between DPt and the precipitation amount. The image processing results show that after 2003 the area of the sand dunes increased dramatically. On average, the smallest area belongs to 2010 (287.3 m2), and the largest areas to 2019 (775.96 m2), 2018 (739.08 m2), and 2017 (739.74 m2). In addition, between 2010 and 2014, a significant increase in the area of the sand dunes from 287.25 to 662.8 m2 was observed. The migration rate is highest between 2010 and 2015, with the lowest values before 2010 and after 2015.

The results of this study have broad implications in the context of sustainable development and climate-related challenges, ecosystem management, and policy-making for regions facing sand dune hazards; crucial insights can thus be gained by coupling remote sensing techniques with in-situ studies.

How to cite: Ahmady-Birgani, H., Ravan, P., Yao, Z., and Afrasinei, G. M.: Newly-Born Sand Dunes of Lake Urmia: Assessing Migration Rate and Morphodynamic Changes Using Remote Sensing Techniques and Field Studies, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12041, https://doi.org/10.5194/egusphere-egu22-12041, 2022.

In recent years, the supervised mapping of landforms has reached a high level, based on classic classification methods and new artificial intelligence techniques. However, it is often difficult to create training data for large and diverse areas, and there can be differences between experts' interpretations of landforms. This can be addressed using unsupervised classification - less effective in the general case, but more objective. A way to make the classification more effective is to create special input variables (accounting for the local specificity of landforms) aimed at capturing the real terrain structure. The study region, the Yamal Peninsula (Arctic coast of Russia), covers marine accumulative and erosional plains, reshaped by several cryogenic processes, especially thermokarst, with many lake hollows. We used the 32 m ArcticDEM and a decomposition of the DEM with 2D FFT, using moving windows with a sequence of sizes from 1.5 to 3 km (at an interval of 0.3 km) and a lag of around 150 m (90-95% overlap). Nine variables were computed: 1) magnitude of the main wave in the height field; 2) wavelength of the main wave; 3) importance (share of the height variation) of a fixed pool of the biggest harmonic waves; 4-6) orthogonal (N-S and W-E) components of the general direction of the height fluctuations (and the significance of the direction); 7-9) coefficients of the exponential trend equation approximating the distribution of wave frequencies/magnitudes. We then trained a landform clustering model for the study area using a Kohonen network, with hierarchical clustering used for additional generalization. A medium-scale (750 m/pixel, matching maps at scales of 1:500 000 - 1:1 000 000) map of Yamal Peninsula landforms was created, in which seven classes of landforms were recognized. The study was supported by the Russian Science Foundation (project no. 19-77-10036).
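The first two of the nine FFT-derived variables (magnitude and wavelength of the main wave) can be sketched as follows for a single moving window; this is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def main_wave(window, cell=32.0):
    """Magnitude and wavelength of the dominant harmonic in a DEM window
    via the 2-D FFT -- a sketch of two of the nine variables listed above
    (cell = DEM resolution in metres; 32 m matches ArcticDEM here).
    Illustrative reconstruction, not the authors' code.
    """
    n, m = window.shape
    spec = np.fft.fft2(window - window.mean())  # remove the mean (zero-frequency) term
    amp = np.abs(spec) / (n * m)
    fy, fx = np.unravel_index(np.argmax(amp), amp.shape)
    ky = np.fft.fftfreq(n, d=cell)[fy]          # signed spatial frequencies (cycles/m)
    kx = np.fft.fftfreq(m, d=cell)[fx]
    wavelength = 1.0 / np.hypot(kx, ky)
    magnitude = 2.0 * amp[fy, fx]               # x2 for the conjugate-symmetric twin
    return magnitude, wavelength

# Synthetic 64x64 window at 32 m resolution: a 3 m-amplitude wave, 512 m wavelength.
n = 64
x = np.arange(n) * 32.0
window = 3.0 * np.sin(2 * np.pi * np.tile(x, (n, 1)) / 512.0)
mag, wl = main_wave(window)
```

The remaining variables (directional components, spectral trend coefficients) would likewise be derived from the windowed amplitude spectrum before feeding the per-window feature vectors to the Kohonen network.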

How to cite: Kharchenko, S.: Medium-scale unsupervised landform mapping of the Yamal Peninsula (Russia) using 2D Fourier decomposition of the ArcticDEM, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12383, https://doi.org/10.5194/egusphere-egu22-12383, 2022.

The delineation of geomorphometric objects that can be translated into geomorphological features is one of the most practical aspects of geomorphometry. Concave features (closed depressions) and convex features (mounds) are often important to delineate from multiple points of view: theoretical approaches, planning for practical purposes, or various other aspects. In this work, I approached sinkholes and burial mounds as representative cases of concave and convex features represented on high-resolution DEMs. Based on manual delineations, several object-based delineation algorithms were tested for accuracy, the interest being in delineating the targeted features as accurately as possible. The segments were then fed to a multilayer perceptron for classification. The results show promising accuracy for both types of features.

How to cite: Niculiță, M.: Machine learning and geomorphometrical objects for convex and concave geomorphological features detection, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1853, https://doi.org/10.5194/egusphere-egu22-1853, 2022.

EGU22-5587 * | Presentations | GM2.3 | Highlight

Comparative analysis of the Copernicus (30 m), TanDEM-X (12 m) and UAV-SfM (0.2 m) DEM to estimate gully volumes and mobilization rates in central Madagascar 

Liesa Brosens, Benjamin Campforts, Gerard Govers, Emilien Aldana-Jague, Vao Fenotiana Razanamahandry, Tantely Razafimbelo, Tovonarivo Rafolisy, and Liesbet Jacobs

Over the past decades advanced technology has become available, revolutionizing the assessment of surface topography. At smaller scales (up to a few km²) structure from motion (SfM) algorithms applied to uncrewed aerial vehicle (UAV) imagery now allow sub-meter resolution. On the other hand, spaceborne digital elevation models (DEMs) are becoming increasingly accurate and are available at a global scale. Two recent spaceborne developments are the 12 m TanDEM-X and 30 m Copernicus DEMs. While sub-meter resolution UAV-SfM DEMs generally serve as a reference, their acquisition remains time-consuming and spatially constrained. However, some applications in geomorphology, such as the estimation of regional or national erosion quantities of specific landforms, require data over large areas. TanDEM-X and Copernicus data can be applied at such scales, but this raises the question of how much accuracy is lost because of the lower spatial resolution.

Here, we evaluate the performance of the 12 m TanDEM-X DEM and the 30 m Copernicus DEM to i) estimate gully volumes, ii) establish an area-volume relationship, and iii) determine sediment mobilization rates, through comparison with a higher-resolution (0.2 m) UAV-SfM DEM. We did this for six study areas in central Madagascar where lavaka (large gullies) are omnipresent and surface area changes over the period 1949-2010s are available. Copernicus-derived lavaka volume estimates were systematically too low, indicating that the Copernicus DEM is not suitable for estimating erosion volumes for geomorphic features at the lavaka scale (10⁰-10⁵ m²): its relatively coarse resolution prevents it from accurately capturing complex topography and smaller geomorphic features. Lavaka volumes obtained from the TanDEM-X DEM were similar to UAV-SfM volumes for the largest features, while smaller features were generally underestimated. To deal with this bias we introduce a breakpoint analysis to eliminate volume reconstructions that suffered from processing errors, as evidenced by significant fractions of negative volumes. This elimination allowed the establishment of an area-volume relationship for the TanDEM-X data with fitted coefficients within the 95% confidence interval of the UAV-SfM relationship. Combined with surface area changes over the period 1949-2010s, our calibrated area-volume relationship enabled us to obtain lavaka mobilization rates ranging between 18 ± 3 and 311 ± 82 t ha⁻¹ yr⁻¹ for the six study areas, with an average of 108 ± 26 t ha⁻¹ yr⁻¹. This not only shows that the Malagasy highlands are currently eroding rapidly by lavaka, but also that lavaka erosion is spatially variable, requiring the assessment of a large area in order to obtain a meaningful estimate of the average erosion rate.
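
Gully area-volume relationships of this kind are conventionally fitted as a power law V = a·A^b by linear regression in log-log space. The numpy sketch below illustrates such a fit on hypothetical data; the coefficients and the exact fitting procedure used by the authors are not given in the abstract.

```python
import numpy as np

def fit_area_volume(areas, volumes):
    """Fit the power law V = a * A**b by linear regression in log-log space.

    areas, volumes : 1-D arrays of gully surface areas (m^2) and volumes (m^3)
    Returns the coefficients (a, b).
    """
    logA, logV = np.log10(areas), np.log10(volumes)
    b, loga = np.polyfit(logA, logV, 1)  # slope and intercept of the log-log line
    return 10.0 ** loga, b

# Hypothetical example: synthetic gullies obeying V = 0.5 * A**1.2 exactly
A = np.array([1e2, 1e3, 1e4, 1e5])
a, b = fit_area_volume(A, 0.5 * A ** 1.2)
```

On exact power-law data the regression returns the generating coefficients, which makes the sketch easy to verify before applying it to noisy DEM-derived volumes.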

With this study, we demonstrate that medium-resolution global DEMs can be used to accurately estimate the volumes of gullies exceeding 800 m² in size, where the proposed breakpoint method can be applied without requiring a higher-resolution DEM. This might aid geomorphologists in quantifying sediment mobilisation rates by highly variable processes such as gully erosion or landsliding at the regional scale, as illustrated by our first assessment of regional lavaka mobilization rates in the central highlands of Madagascar.

How to cite: Brosens, L., Campforts, B., Govers, G., Aldana-Jague, E., Razanamahandry, V. F., Razafimbelo, T., Rafolisy, T., and Jacobs, L.: Comparative analysis of the Copernicus (30 m), TanDEM-X (12 m) and UAV-SfM (0.2 m) DEM to estimate gully volumes and mobilization rates in central Madagascar, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5587, https://doi.org/10.5194/egusphere-egu22-5587, 2022.

The concept of terrain visibility is vast and hard to summarise in a single definition. Generically, it is a property that measures how observable a territory is from a single point of view or from multiple points of view.

The estimation or calculation of visibility indices has been used in multiple fields, including architecture, archaeology, communications, tourism, land planning, and military applications. Recently (Meinhardt et al., 2015; Bornaetxea et al., 2018; Knevels et al., 2020), the concept of viewshed, i.e. the geographical area that is visible from one or more points of view, has been called into play for applications involving geomorphology. In particular, it has been used to identify the portions of territory in which existing landslide inventories, carried out through field surveys, can be considered valuable for the calculation of landslide susceptibility. The aim is to delineate the Effective Surveyed Area, i.e. the area that has actually been observed by the operators in the field.

However, this purely geometric approach cannot guarantee that objects are actually visible just because they are in a direct line-of-sight relationship with the observer. Due to their size and/or orientation in space, they may be (i) poorly or not at all detectable and/or (ii) observable from only a few viewpoints.    

For this reason we have developed r.survey (Bornaetxea & Marchesini, 2021), a plugin (Python script) for GRASS GIS, which makes it possible to estimate (i) from how many observation points each point of the territory is visible, (ii) from which observation point each point of the territory is most effectively visible, and (iii) whether an object of a specific size can be detected. Concerning the last point in particular, r.survey calculates the solid angle subtended by a circle of dimensions equivalent to those of the object to be surveyed, assumed to be lying on the terrain and oriented according to the slope and aspect derived from a digital terrain model. The solid angle provides a continuous measure of the visibility of the sought object, which can be compared with typical values of human visual acuity. The concept of 'Effective Surveyed Area' can thus be reworked into the more accurate 'Size-specific Effective Surveyed Area' (SsESA). The new concept makes it possible to identify those portions of territory in which, during fieldwork, it is possible to observe objects of equal or greater size than those of interest, also considering their orientation in space with respect to the observer.
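
The detectability test described above can be illustrated with the small-object approximation for the solid angle of a tilted disc, Ω ≈ A·cos(α)/d². This is a sketch of the geometric idea, not r.survey's implementation, and the acuity threshold used below is an illustrative assumption.

```python
import math

def disc_solid_angle(radius, distance, incidence_deg):
    """Approximate solid angle (steradians) subtended by a small disc.

    radius        : disc radius (m), equivalent to the object to be surveyed
    distance      : observer-to-object distance (m)
    incidence_deg : angle between the disc normal and the line of sight
    Small-object approximation: omega ~ area * cos(incidence) / distance**2.
    """
    area = math.pi * radius ** 2
    return area * math.cos(math.radians(incidence_deg)) / distance ** 2

# Hypothetical threshold: the solid angle of a disc seen face-on under a
# 1-arc-minute angular radius (a common figure for human visual acuity)
ACUITY_SR = math.pi * math.radians(1.0 / 60.0) ** 2

def detectable(radius, distance, incidence_deg):
    """True if the object's solid angle meets the assumed acuity threshold."""
    return disc_solid_angle(radius, distance, incidence_deg) >= ACUITY_SR
```

A 1 m radius object viewed face-on is detectable at 100 m under this threshold but not at 10 km, and an edge-on object (incidence near 90°) subtends essentially no solid angle regardless of distance.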

The code of r.survey, which is based on the libraries and modules of GRASS GIS and was written to exploit multi-core processing, is open source and available for downloading (https://doi.org/10.5281/zenodo.3993140) together with a manual and some example data.

How to cite: Marchesini, I. and Bornaetxea, T.: r.survey: a tool to assess whether elements of specific sizes can be visually detected during field surveys, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5715, https://doi.org/10.5194/egusphere-egu22-5715, 2022.

EGU22-8456 | Presentations | GM2.3

Sediment connectivity assessment through a geomorphometric approach: a review of recent applications 

Marco Cavalli, Stefano Crema, Sara Cucchiaro, Giorgia Macchi, Sebastiano Trevisani, and Lorenzo Marchi

Sediment connectivity, defined as the degree to which a system facilitates the transfer of sediment through itself by means of coupling relationships between its components, has recently emerged as a paramount property of geomorphic systems. The growing interest of the earth sciences community in connectivity has made this property a key concept in the analysis of sediment transfer processes and one of the building blocks of modern geomorphology. The increasing availability of high-resolution Digital Elevation Models (DEMs) from different sources, such as LiDAR and Structure from Motion (SfM), paved the way to quantitative and semi-quantitative approaches for assessing sediment connectivity. A geomorphometric index of sediment connectivity, based on DEM derivatives such as drainage area, slope, flow length and surface roughness, has been developed along with a related freeware software tool (SedInConnect). The index aims at depicting spatial connectivity patterns at the catchment scale to support the assessment of the contribution of a given part of the catchment as a sediment source and to define sediment transfer paths. The increasing interest in the quantitative characterization of the linkages between landscape units and the straightforward applicability of this index have resulted in numerous applications in different contexts. This work presents and discusses the main applications of the sediment connectivity index, along with a recent application in the frame of the Interreg ITAT3032 SedInOut Project (2019-2022). Being a topography-based index, it focuses on structural aspects of connectivity, and the quality and resolution of DEMs may have a significant impact on the results. Future development should consider process-based connectivity and incorporate temporal variability directly into the index.
Moreover, this work demonstrates that, when carefully applied considering the intrinsic limitations of the topographic-based approach, the index can rapidly provide a spatial characterization of sediment dynamics, thus improving the understanding of geomorphic system behavior and, consequently, hazard and risk assessment.
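
For a single cell, a connectivity index of this family takes the general form IC = log10(D_up/D_dn), with an upslope component built from the average weight, average slope and contributing area, and a downslope component accumulated along the flow path (following the Borselli et al., 2008 formulation as modified by Cavalli et al., 2013, with a roughness-based weighting factor). A hedged numpy sketch:

```python
import numpy as np

def connectivity_index(W_up, S_up, area, d_down, W_down, S_down):
    """Sediment connectivity index IC = log10(D_up / D_dn) for one cell.

    W_up, S_up             : weights (e.g. roughness-based) and slopes over
                             the upslope contributing area
    area                   : upslope contributing area (m^2)
    d_down, W_down, S_down : per-cell flow-path lengths, weights and slopes
                             along the downslope path to the target (e.g. channel)
    """
    D_up = np.mean(W_up) * np.mean(S_up) * np.sqrt(area)
    D_dn = np.sum(np.asarray(d_down) / (np.asarray(W_down) * np.asarray(S_down)))
    return np.log10(D_up / D_dn)
```

With uniform weights, a 0.1 mean slope, a 1 ha contributing area and a short downslope path, the sketch yields IC = -1, illustrating that the index is dimensionless and typically negative.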

How to cite: Cavalli, M., Crema, S., Cucchiaro, S., Macchi, G., Trevisani, S., and Marchi, L.: Sediment connectivity assessment through a geomorphometric approach: a review of recent applications, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8456, https://doi.org/10.5194/egusphere-egu22-8456, 2022.

EGU22-8994 | Presentations | GM2.3 | Highlight

FABDEM - A 30m global map of elevation with forests and buildings removed 

Peter Uhe, Laurence Hawker, Luntadila Paulo, Jeison Sosa, Christopher Sampson, and Jeffrey Neal

Digital Elevation Models (DEMs) depict the elevation of the Earth’s surface and are fundamental to many applications, particularly in the geosciences. To date, global DEMs contain building and forest artifacts that limit their functionality for applications that require precise measurement of terrain elevation, such as flood inundation modeling. Using machine learning techniques, we remove both building and tree height bias from the recently published Copernicus GLO-30 DEM to create a new dataset called FABDEM (Forest And Buildings removed Copernicus DEM). This new dataset is available at 1 arc-second grid spacing (~30 m) between 60°S-80°N, and is the first global DEM to remove both buildings and trees.

Our correction algorithm is trained on a comprehensive and unique set of reference elevation data from 12 countries that covers a wide range of climate zones and urban types. This results in a wider applicability compared to previous DEM correction studies trained on data from a single country. As a result, we reduce the mean absolute vertical error from 5.15 m to 2.88 m in forested areas, and from 1.61 m to 1.12 m in built-up areas, compared to the Copernicus GLO-30 DEM. Further statistical and visual comparisons to other global DEMs suggest FABDEM is the most accurate global DEM, with median errors ranging from -0.11 m to 0.45 m for the different landcover types assessed. The biggest improvements were found in areas of dense canopy coverage (>50%), where FABDEM has a median error of 0.45 m, compared to 2.95 m for MERIT DEM and 12.95 m for the Copernicus GLO-30 DEM.

FABDEM has notable improvements over existing global DEMs, resulting from the use of Copernicus GLO-30 and a powerful machine learning correction of building and tree bias. As such, there will be benefits in using FABDEM for purposes where a depiction of the bare-earth terrain is required, such as applications in geomorphology, glaciology and hydrology.

How to cite: Uhe, P., Hawker, L., Paulo, L., Sosa, J., Sampson, C., and Neal, J.: FABDEM - A 30m global map of elevation with forests and buildings removed, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8994, https://doi.org/10.5194/egusphere-egu22-8994, 2022.

Stream morphology is an important indicator for revealing the geomorphological features and evolution of the Yangtze River. Existing studies on the morphology of the Yangtze River focus on planar features. However, vertical features, which largely control flow capacity and erosion intensity, are also important. Furthermore, traditional studies often focus on a few individual stream profiles. Stream profiles, however, are linked together by runoff nodes and thus jointly affect the geomorphological evolution of the Yangtze River. In this study, a clustering method for stream profiles in the Yangtze River is proposed by plotting all profiles together. A stream evolution index is then used to investigate the geomorphological features of the stream profile clusters and reveal the evolution of the Yangtze River. Based on the stream profile clusters, the erosion base of the Yangtze River generally changes from steep to gentle from the upper to the lower reaches, and the evolution degree of the stream changes from low to high. The asymmetric distribution of knickpoints in the Han River Basin supports the view that the boundary of the eastward growth of the Tibetan Plateau has reached the vicinity of the Daba Mountains.

How to cite: Zhao, F. and Xiong, L.: Clustering stream profiles to understand the geomorphological features and evolution of the Yangtze River by using DEMS, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13121, https://doi.org/10.5194/egusphere-egu22-13121, 2022.

Land surface curvature (LSC) is a basic attribute of topography and influences local effects of gravitational energy and surface material transport. However, the calculation of LSCs based on triangulated irregular networks (TINs) has not been fully studied, which restricts further geoscience studies based on TIN digital elevation models (DEMs). The triangular facets and vertices of a TIN are both expressions of the land surface; therefore, LSCs can be calculated based on their adjacency relationships. In this study, we propose a mathematical vector framework to enhance LSC theory. In this framework, LSCs can be calculated from both triangular facets and vertices, and the choice of weighting method is flexible. We use the concept of the curvature tensor to interpret and calculate the commonly used LSCs, which provides a new perspective for geoscience research. We also investigate the capacity of the TIN-based method to calculate LSCs and compare it with grid-based methods. Based on a mathematically simulated surface, we reach the following conclusions. First, the TIN-based method shows scale effects similar to those of the grid-based EVANS and ZEVENBERGEN methods. Second, the TIN-based method is less error-sensitive than the EVANS and ZEVENBERGEN polynomial grid-based methods for high-error DEMs. Third, the shape of the TIN triangles exerts a great influence on the LSC calculation, which means that the accuracy of the calculation can be further improved with an optimized TIN, although the result will be discontinuous. Based on three real landforms with different data sources, we discuss possible applications of the TIN-based method, e.g., the classification of land surface concavity-convexity and hillslope units. We find that the TIN-based method can produce visually better classification results than the grid-based method.
This qualitative comparison reflects the potential of using TINs in multiscale geoscience research and the capacity of the proposed TIN-based LSC calculation methods. Our proposed mathematical vector framework for calculating LSCs from TINs is a preliminary approach to mitigating the multiple-scale problem in geoscience. In addition, this research integrates mathematical vector and geographic theories and provides an important reference for geoscience research.


How to cite: Hu, G., Xiong, L., and Tang, G.: Mathematical vector framework for gravity-specific land surface curvatures calculation from triangulated irregular networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13122, https://doi.org/10.5194/egusphere-egu22-13122, 2022.

EGU22-13124 | Presentations | GM2.3

Integrating topographic knowledge into deep learning for the void-filling of digital elevation models 

Sijin Li, Liyang Xiong, and Guoan Tang

Digital elevation models (DEMs) contain some of the most important data for providing terrain information and supporting environmental analyses. However, the applications of DEMs are significantly limited by data voids, which are commonly found in regions with rugged terrain. We propose a novel deep learning-based strategy called a topographic knowledge-constrained conditional generative adversarial network (TKCGAN) to fill data voids in DEMs. Shuttle Radar Topography Mission (SRTM) data with spatial resolutions of 3 and 1 arc-seconds are used in experiments to demonstrate the applicability of the TKCGAN. Qualitative topographic knowledge of valleys and ridges is transformed into new loss functions that can be applied in deep learning-based algorithms and constrain the training process. The results show that the TKCGAN outperforms other common methods in filling voids and improves the elevation and surface slope accuracy of the reconstruction results. The performance of TKCGAN is stable in the test areas and reduces the error in the regions with medium and high surface slopes. Furthermore, the analysis of profiles indicates that the TKCGAN achieves better performance according to a visual inspection and quantitative comparison. In addition, the proposed strategy can be applied to DEMs with different resolutions. This work is an endeavour to transform perceptive topographic knowledge into computer-processable rules and benefits future research related to terrain reconstruction and modelling.

How to cite: Li, S., Xiong, L., and Tang, G.: Integrating topographic knowledge into deep learning for the void-filling of digital elevation models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13124, https://doi.org/10.5194/egusphere-egu22-13124, 2022.

EGU22-13129 | Presentations | GM2.3

Research on texture features for typical sand dunes using multi-source data 

Junfei Ma, Fayuan Li, Lulu Liu, Jianhua Cheng, and Guoan Tang

Deserts have obvious textural features; in particular, different types of sand dunes differ significantly in their morphological texture. Existing studies on desert texture have mainly focused on extracting dune ridges or sand ripples from remote sensing images. However, a comprehensive understanding of desert texture at multiple scales and a quantitative representation of texture features are lacking. Our study area is in the Badain Jaran Desert. Four typical dune types in this desert are selected, namely, star-like chain megadunes, barchan chains, compound chain dunes, and schuppen chain megadunes. Based on Sentinel-2 imagery and the ASTER 30 m DEM, the macroscopic and microscopic texture features of the desert are extracted using positive and negative topography, edge detection and local binary pattern (LBP) methods, respectively. Eight texture indexes based on the gray-level co-occurrence matrix (GLCM) are calculated for the original data and the abstracted texture data, respectively. These texture parameters are then clustered based on Spearman correlations. Finally, the coefficient of variation is used to determine representative indicators for each cluster in order to construct a geomorphological texture information spectrum library of typical dune types. The results show that the macroscopic and microscopic texture features of the same dune type are highly similar, and the geomorphological texture information spectrum can distinguish different dune types well by their curve features.
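
As an illustration of the GLCM step, the sketch below builds a co-occurrence matrix for one pixel offset and computes two of the commonly used indices (contrast and homogeneity). The eight indices and the offsets actually used in the study are not detailed in the abstract.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one offset, symmetric and normalised.

    image  : 2-D integer array with values in [0, levels)
    dx, dy : pixel offset defining the co-occurrence direction
    """
    img = np.asarray(image)
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            P[img[y, x], img[y + dy, x + dx]] += 1
    P = P + P.T                      # make symmetric (count both directions)
    return P / P.sum()

def contrast(P):
    i, j = np.indices(P.shape)
    return np.sum(P * (i - j) ** 2)

def homogeneity(P):
    i, j = np.indices(P.shape)
    return np.sum(P / (1.0 + np.abs(i - j)))
```

A uniform patch gives contrast 0 and homogeneity 1, while a checkerboard (the most "ripple-like" extreme at this offset) gives contrast 1, which is the kind of separation these indices exploit for dune texture.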

How to cite: Ma, J., Li, F., Liu, L., Cheng, J., and Tang, G.: Research on texture features for typical sand dunes using multi-source data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13129, https://doi.org/10.5194/egusphere-egu22-13129, 2022.

EGU22-13130 | Presentations | GM2.3

Regional differences in gully network connectivity based on graph theory: a case study on the Loess Plateau, China 

Jianhua Cheng, Lanhua Luo, Fayuan Li, and Lulu Liu

Gullies are among the areas with the most frequent material exchanges in loess landforms. Studying the influence of the spatial structure of gully networks on material transport, and describing the difficulty of material transport from sources to sinks, is of great significance for understanding the development and evolution of loess landforms. This study is based on graph theory and digital terrain analysis and describes the relationship between gully networks and terrain feature elements via a gully network graph model. The adjacency matrix of the gully network graph model is constructed to quantify connectivity. Taking six typical small-watershed sample areas of the Loess Plateau as the research objects, the changes in gully network connectivity characteristics in different loess geomorphic areas are analyzed in terms of both overall network connectivity and node connectivity. The results show that (1) from Shenmu to Chunhua (the sample areas from north to south), the average gully network edge weights first decrease and then increase, with a maximum of 0.253 in the Shenmu sample area and a minimum of 0.093 in the Yanchuan sample area. These values show that as gully development increases, the capacity of the gully network to transport material increases and the resistance encountered during transfer decreases. (2) The average node strength reaches its minimum in the Yanchuan sample area and gradually increases from Yanchuan towards the north and south sides. It can be concluded that the overall connectivity of the gully network shows a gradually weakening trend from the Yanchuan sample area towards the north and south sides.
(3) The potential flow (Fi) and the network structural connectivity index (NSC) show similar patterns of change: from north to south, node connectivity gradually increases from the Shenmu to the Yanchuan sample areas and gradually weakens from the Yanchuan to the Chunhua sample areas. The accessibility from source to sink (Shi) shows the opposite trend. At the same time, the connectivity index values of the gully network nodes in the six typical areas all show clustered spatial distribution characteristics. (4) Comparing the connectivity indicators calculated with the Euclidean distance used in previous studies against those calculated with the sediment transport capacity index used in this study, and comparing the variation in the quantitative gully network indicators with that of the connectivity indicators, supports the soundness of the connectivity indicators proposed in this paper. The connectivity of the gully network contains abundant and important information on the development and evolution of loess gullies, and research on it will help deepen the understanding of the evolution process and mechanisms of loess gullies.
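
The node-strength measure used in point (2) follows directly from the weighted adjacency matrix: it is the sum of the weights of the edges incident to each node. The network and edge weights below are hypothetical.

```python
import numpy as np

def node_strength(adj):
    """Node strength: sum of the weights of edges incident to each node.

    adj : symmetric weighted adjacency matrix of the gully network graph,
          adj[i, j] = edge weight (e.g. a sediment transport capacity) or 0
    """
    return np.asarray(adj).sum(axis=1)

# Hypothetical 3-node gully network: the middle node is the best connected
A = np.array([[0.0, 0.2, 0.0],
              [0.2, 0.0, 0.1],
              [0.0, 0.1, 0.0]])
s = node_strength(A)
```

Averaging `s` over all nodes gives the per-watershed "average node strength" compared across the six sample areas in the abstract.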

How to cite: Cheng, J., Luo, L., Li, F., and Liu, L.: Regional differences in gully network connectivity based on graph theory: a case study on the Loess Plateau, China, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13130, https://doi.org/10.5194/egusphere-egu22-13130, 2022.

EGU22-13131 | Presentations | GM2.3

Morphological characteristics and evolution model of loess gully cross section 

Lulu Liu, Fayuan Li, Xue Yang, and Jianhua Cheng

Gully morphology is an important part of loess geomorphology research. As a gully develops, the variation of its cross section is the most important aspect, and it can intuitively reflect the lateral widening of the gully slopes. Therefore, in-depth research on the variation of the cross-sectional morphology of gullies is important for understanding the development process of loess gullies. Based on data from nine stages of an indoor simulated loess small watershed, this paper studies in depth the evolution of a complete branch gully in the watershed using the theory and methods of digital terrain analysis. First, we analyse the morphological characteristics of the gully cross sections in the simulated small watershed. The experiment shows that as the gully develops, the average gradient of the slopes decreases continuously, and the slope profiles are mostly concave along the slope direction; the degree of concavity first increases and then gradually levels off. The gully erosion mode gradually shifts from downward cutting to lateral erosion: the more mature the gully development, the smaller the depth of gully-bottom incision compared with the width of gully widening. Furthermore, the surface cutting depth tends towards stability, as does the slope. We then study how the slope morphology of the gully cross section is transformed as the gully develops, and establish a prediction model of this transformation using a Markov chain. The Markov model can well reflect the dynamic change of the slope morphology of the gully cross section, which is of considerable importance for revealing the external expression and internal mechanisms of gully morphology.
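
The Markov-chain prediction step can be sketched as follows, with hypothetical slope-form classes (e.g. 0 = straight, 1 = concave, 2 = convex); the actual class scheme and the transition data derived from the nine simulation stages are not given in the abstract.

```python
import numpy as np

def transition_matrix(sequences, n_states):
    """Estimate a Markov transition matrix from observed class sequences."""
    C = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            C[a, b] += 1
    # Row-normalise; rows with no observations default to a uniform distribution
    rows = C.sum(axis=1, keepdims=True)
    return np.where(rows > 0, C / np.where(rows == 0, 1, rows), 1.0 / n_states)

def predict(dist, P, steps=1):
    """Propagate a class distribution forward a given number of stages."""
    return dist @ np.linalg.matrix_power(P, steps)
```

Given observed class sequences from successive development stages, `transition_matrix` yields the chain and `predict` forecasts the cross-sectional slope-form distribution at later stages.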

How to cite: Liu, L., Li, F., Yang, X., and Cheng, J.: Morphological characteristics and evolution model of loess gully cross section, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13131, https://doi.org/10.5194/egusphere-egu22-13131, 2022.

In areas that have experienced a strong earthquake, the formation of clusters of seismic cracks is considered related to susceptibility to post-seismic slides. However, the relationship between crack distribution and the occurrence of post-seismic slides has rarely been evaluated. This study developed an index representing the spatial density of seismic cracks (dense crack index: DCI) for the area where post-seismic slides were identified after the 2016 Kumamoto earthquake (Mw 7.0). The susceptibility to post-seismic slides was then assessed using models based on the weights of evidence (WoE) and random forest (RF) methods, with the DCI as a conditioning factor. Both models confirmed the importance of the DCI, although the improvement in model performance, as indicated by area-under-the-curve values, was marginal or negligible when the index was included. This was largely because combinations of features that indicated where open cracks were likely to occur, or ridgelines where seismic waves were prone to be amplified, could compensate for the absence of the index. The contribution of the DCI could be improved if more accurate LiDAR data were used in the analysis.
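
In the WoE method, each class of a conditioning factor (e.g. a DCI bin) receives positive and negative weights from landslide and non-landslide cell counts, W⁺ = ln(P(B|D)/P(B|D̄)) and W⁻ = ln(P(B̄|D)/P(B̄|D̄)). A minimal sketch with hypothetical counts:

```python
import math

def woe_weights(n_class_slide, n_slide, n_class_stable, n_stable):
    """Weights of evidence for one class of a conditioning factor.

    n_class_slide  : landslide cells falling in the class
    n_slide        : all landslide cells
    n_class_stable : non-landslide cells falling in the class
    n_stable       : all non-landslide cells
    Returns (W+, W-): weights for cells inside / outside the class.
    """
    p_b_d = n_class_slide / n_slide        # P(class | landslide)
    p_b_nd = n_class_stable / n_stable     # P(class | no landslide)
    w_plus = math.log(p_b_d / p_b_nd)
    w_minus = math.log((1 - p_b_d) / (1 - p_b_nd))
    return w_plus, w_minus
```

A large positive W⁺ (and negative W⁻) marks a class, such as a high-DCI bin, as strongly associated with slides; summing the weights over all factors gives the susceptibility score.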

How to cite: Kasai, M. and Yamaguchi, S.: Assessment of post-seismic landslide susceptibility using an index representative of seismic cracks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13239, https://doi.org/10.5194/egusphere-egu22-13239, 2022.

EGU22-13325 | Presentations | GM2.3

Evaluating Geomorphometric Variables to Identify Groundwater Potential Zones in Sahel-Doukkala, Morocco 

Adnane Habib, Abdelaziz El Arabi, and Kamal Labbassi

Topography and geology are considered the primary factors influencing groundwater flow and accumulation. To evaluate their potential for identifying groundwater, an integrated approach was developed and used in this work to delineate groundwater potential zones in Sahel-Doukkala, Morocco, by combining geomorphometric variables and a Multi-Criteria Evaluation (MCE) technique. Aside from lithology, all variables used in this approach were derived from a 10 m Digital Elevation Model (DEM) generated from ALOS-PRISM stereo-images using photogrammetric techniques. The chosen variables are considered to be very closely associated with groundwater circulation and accumulation, namely lithology, topographic wetness index (TWI), convergence index (CI), lineament density, lineament intersection density, and drainage network density. These variables were given weights based on their respective importance for the occurrence of groundwater, using a cumulative effect matrix. This process showed that lineament density had the strongest effect on the other variables, receiving the largest weight (24%), followed by lineament intersection density (20%); TWI and CI each received 16%, while lithology and drainage network density had the smallest weights (12% each). An MCE-based weighted sum method was then applied in a GIS to generate the groundwater potential zones map.
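
The weighted-sum overlay with the weights reported above can be sketched as follows; the min-max normalisation of the input layers is an illustrative assumption, as the abstract does not state how the layers were scored.

```python
import numpy as np

# Weights from the cumulative effect matrix, as reported in the study (sum to 1)
WEIGHTS = {
    "lineament_density": 0.24,
    "lineament_intersection_density": 0.20,
    "twi": 0.16,
    "ci": 0.16,
    "lithology": 0.12,
    "drainage_density": 0.12,
}

def normalise(layer):
    """Rescale a raster layer to the 0-1 range (min-max), a common MCE pre-step."""
    layer = np.asarray(layer, dtype=float)
    lo, hi = layer.min(), layer.max()
    return (layer - lo) / (hi - lo) if hi > lo else np.zeros_like(layer)

def groundwater_potential(layers):
    """Weighted-sum overlay: layers maps variable name -> 2-D score raster."""
    return sum(w * normalise(layers[name]) for name, w in WEIGHTS.items())
```

The resulting raster, with values in 0-1, can then be sliced into the "poor", "moderate" and "high" zones described below.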

The obtained map was classified into three zones, viz. “poor”, “moderate” and “high”. These zones delineate areas where the subsurface has varying degrees of potential to store water and also indicate the availability of groundwater. The zone with “high” potential covers an area of approximately 714 km² (44% of the study area) and identifies areas that are suitable for groundwater storage. These zones show a strong association with low drainage density, low TWI values, and a high density of lineaments and lineament intersections. The groundwater potential zones map produced by the proposed approach was verified using the locations and groundwater level depths of 325 existing wells categorized as successful, and the result was found satisfactory: 91% of the successful existing wells are located in zones that fall in the “moderate” and “high” areas. In addition, the validity of the proposed approach was tested against groundwater level depth, which indicates the actual groundwater potential. Places with “high” potential have an average groundwater level depth of approximately 27 m, whereas areas with “moderate” and “poor” potential show averages of 31 m and 37 m, respectively. The validation results show good agreement between existing groundwater wells and the obtained groundwater potential zones map and were considered reasonable. Therefore, the produced map can be of great help to hydrogeologists in detecting, in a time- and cost-effective manner, new zones that may carry a high groundwater potential.

Because DEM data are among the most widely and easily accessible data, the proposed method is well suited for areas where data are scarce. As a result, it can be widely used to develop conceptual models based on geomorphometric variables as primary inputs for similar arid and semi-arid regions suffering from data scarcity.

How to cite: Habib, A., El Arabi, A., and Labbassi, K.: Evaluating Geomorphometric Variables to Identify Groundwater Potential Zones in Sahel-Doukkala, Morocco, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13325, https://doi.org/10.5194/egusphere-egu22-13325, 2022.

EGU22-13343 | Presentations | GM2.3

A scale-independent model for the analysis of geomorphodiversity index 

Laura Melelli, Martina Burnelli, and Massimiliano Alvioli

The World Urbanization Prospects (UN) estimates that by 2050 about 70% of the world's population will live in urban areas. GIS and spatial analysis are essential tools for proper land use planning that takes into account the geomorphological characteristics of the territory as the starting point for safeguarding urban ecosystems.

Several geological and environmental approaches have been proposed, although they usually lack an objective, quantitative and scale-independent model. At variance with common approaches, a new geomorphodiversity index was recently proposed that aims at an objective classification of joint geological, hydrological, biotic and ... features in Italy.

In this work, we show results of a study performed in urban areas in Italy, where we apply systematic spatial analysis for the identification of the geomorphodiversity index. The approach combines a quantitative assessment of topographic features (i.e., slope and landform classification) with spatial analysis in GRASS GIS, using the geomorphon method and additional morphometric quantities. We aim at the definition of a new scale-independent approach, analyzing all of the morphometric quantities calculated at different scales (i.e., within moving windows of different sizes). We show that a scale- and model-independent selection of such features is possible for most of the considered quantities.
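The multi-scale analysis can be illustrated with a toy computation (a sketch only, not the GRASS GIS workflow used in the study; the local-relief metric, synthetic grid and window radii are chosen purely for illustration): the same morphometric quantity is evaluated within moving windows of increasing size.

```python
import numpy as np

def local_relief(dem, radius):
    """Local relief (max - min elevation) in a (2*radius+1)^2 moving window."""
    pad = np.pad(dem, radius, mode="edge")
    n = 2 * radius + 1
    out = np.empty(dem.shape)
    for i in range(dem.shape[0]):
        for j in range(dem.shape[1]):
            win = pad[i:i + n, j:j + n]
            out[i, j] = win.max() - win.min()
    return out

# Evaluate the same morphometric quantity at several scales (window radii)
rng = np.random.default_rng(0)
dem = rng.random((50, 50)) * 100.0   # synthetic elevation grid, metres
reliefs = {r: local_relief(dem, r) for r in (1, 3, 5)}
```

Comparing how each quantity behaves across window sizes is what permits the kind of scale-independent feature selection described above.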

We argue that our work is relevant for the objective selection of quantities to define a geomorphodiversity index, and for its calculation in areas of arbitrary size and geomorphological properties, provided the same input data are available.

How to cite: Melelli, L., Burnelli, M., and Alvioli, M.: A scale-independent model for the analysis of geomorphodiversity index, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13343, https://doi.org/10.5194/egusphere-egu22-13343, 2022.

ESSI2 – Data, Software and Computing Infrastructures Across Earth and Space Sciences

EGU22-2145 | Presentations | ESSI2.3

25 years of the IPCC Data Distribution Centre at the German Climate Computing Center (DKRZ) 

Martina Stockhause and Michael Lautenschlager

The Data Distribution Centre (DDC) of the Intergovernmental Panel on Climate Change (IPCC) celebrates its 25th anniversary in 2022. DKRZ is the last remaining founding member among the DDC Partners. This contribution looks back on the past 25 years of the DDC at DKRZ, from its establishment to the present, and highlights the milestones introduced in the areas of data management and data standardization, e.g.

  • the NetCDF/CF data standard,
  • the DataCite data DOI assignment enabling data citation,  
  • the data preservation and stewardship standards of the World Data System (WDS), 
  • the Earth System Grid Federation (ESGF) as data infrastructure standard, or 
  • the IPCC FAIR Guidelines for the current 6th Assessment Report (AR6). 
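As a minimal illustration of what the NetCDF/CF data standard contributes (a simplified sketch, not DKRZ's actual tooling; real CF compliance checking covers far more rules), each variable carries standardized attributes such as `units` and `standard_name` that make datasets self-describing and machine-comparable:

```python
# Simplified CF-style attribute check; attribute requirements shown here
# are a reduced illustration of the CF conventions, not the full standard.
REQUIRED_VAR_ATTRS = {"units", "standard_name"}

dataset = {
    "global_attrs": {"Conventions": "CF-1.8",
                     "title": "Example near-surface air temperature"},
    "variables": {
        "tas": {"units": "K", "standard_name": "air_temperature"},
        "bad": {"units": "1"},               # missing standard_name
    },
}

def cf_attr_gaps(ds):
    """Return, per variable, the required CF-style attributes it is missing."""
    return {name: sorted(REQUIRED_VAR_ATTRS - attrs.keys())
            for name, attrs in ds["variables"].items()
            if REQUIRED_VAR_ATTRS - attrs.keys()}
```

Running `cf_attr_gaps(dataset)` reports only the non-compliant variable, which is the kind of automated curation such standards enable.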

In addition to the continuous effort to adopt new standards and curate the data holdings, current challenges - technical and organizational - and possible future directions are discussed. The most difficult of these challenges remains the long-term strategy for sustainable DDC services as part of an increasingly interoperable data service environment, which is technically described by the FAIR digital object framework and guided, in terms of content, by the UN Sustainable Development Goal 13 on climate action.

(http://ipcc-data.org; http://ipcc.wdc-climate.de)

How to cite: Stockhause, M. and Lautenschlager, M.: 25 years of the IPCC Data Distribution Centre at the German Climate Computing Center (DKRZ), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2145, https://doi.org/10.5194/egusphere-egu22-2145, 2022.

EGU22-3261 | Presentations | ESSI2.3

Reimagining the AuScope Virtual Research Environment Through Human-Centred Design 

Jens Klump, Ulrich Engelke, Vincent Fazio, Pavel Golodoniuc, Lesley Wyborn, and Tim Rawling

AuScope, founded in 2006, is the provider of research infrastructure to Australia's Earth and geospatial science community. Its unifying strategic goals include building the Downward Looking Telescope (DLT) (a metaphor for an integrated system of Earth and geospatial instruments, services, data and analytics to enable scientists to understand Earth's evolution through time) and exploring how Earth resources may support growing human demands. The AuScope Virtual Research Environment (AVRE) program is responsible for enabling the DLT through providing persistent access to required data and tools from a diverse range of Australian research organisations, government geological surveys and the international community.

In 2009 AuScope released a portal to provide online access to evolved data products to specific groups of users. Subsequently, this portal was combined with online tools to create the AVRE platform of specialised Virtual Laboratories that enabled the execution of explicit workflows. By 2021 it was recognised that AVRE should modernise and take advantage of new technologies that could empower researchers to access higher storage capacities and wider varieties of computational processing options. AVRE also needed to leverage notebooks, containerisation and mobile solutions and facilitate a greater emphasis on ML and AI techniques. Increased storage meant researchers could access less processed, rawer forms of data, which they could then prepare for their own specific requirements, whilst the growth in Open Source software meant easy access to tools that could meet or efficiently be adapted to their needs. 

Recognising that AuScope researchers now required new mechanisms to help them find and reuse multiple resources from globally distributed sites and be able to integrate these with their own data types and tools, the AVRE informatics and technology experts began assessing the requirements for modernising the AVRE platform. The technologists reviewed other virtual research environments, research data portals, and e-commerce platforms for examples of well-designed interfaces and services that help users get the best use out of a platform. 

We then undertook a series of interactive consultations across a broad range of AuScope researchers (geophysics, geochemistry, geospatial, geology, etc). We accepted there were multiple requirements, from simple data processing on small volume data sets through to complex data modelling and assimilation at petascale, and openly acknowledged that there were numerous ways of processing: one size would not fit all.

In the consultations, we focussed on the context that AVRE was about enabling researchers to use a diversity of resources to realise the AuScope strategic goal of the DLT. We recognised that this would require an ability to meet the specialised requirements of a broad range of the current individual AuScope geoscience programs, but at the same time, there was a need to allow for future integration with global transdisciplinary challenges that explore how Earth resources may support growing human demands.

In this presentation, we will discuss the outcomes from our consultations with various AuScope Programs and will present initial plans for a co-designed, re-engineered AVRE platform to meet the expressed needs of a diverse range of DLT developers and users.

How to cite: Klump, J., Engelke, U., Fazio, V., Golodoniuc, P., Wyborn, L., and Rawling, T.: Reimagining the AuScope Virtual Research Environment Through Human-Centred Design, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3261, https://doi.org/10.5194/egusphere-egu22-3261, 2022.

EGU22-4265 | Presentations | ESSI2.3

EPOS Data portal for cross-disciplinary data access in the Solid Earth Domain 

Daniele Bailo, Jan Michalek, Keith G Jeffery, Kuvvet Atakan, and Rossana Paciello and the EPOS IT Team

The European Plate Observing System (EPOS) addresses the problem of homogeneous access to heterogeneous digital assets in the geoscience of the European tectonic plate. Such access opens new research opportunities. Previous attempts have been limited in scope and required much human intervention. EPOS adopts an advanced Information and Communication Technologies (ICT) architecture driven by a catalogue of rich metadata. The architecture of the EPOS system, together with the challenges and solutions adopted, is presented. The EPOS Data Portal introduces a new way of doing cross-disciplinary research. Multidisciplinary research places new demands on both students and teachers. The EPOS portal can be used either to explore the available datasets or to facilitate the research itself. It can also be very instructive in teaching, by demonstrating scientific use cases.

EPOS ERIC was established in 2018 as a European Research Infrastructure Consortium for building a pan-European infrastructure and accessing solid Earth science data. The sustainability phase of EPOS (EPOS-SP – EU Horizon 2020 – InfraDev Programme – Project no. 871121; 2020-2022) focuses on finding solutions for the long-term sustainability of the EPOS developments. The ambitious plan of geoscientific data integration started in 2002 with a Conception Phase and continued with the EPOS-PP (Preparatory Phase, 2010-2014), in which about 20 partners joined the project. The completed EPOS-IP project (EPOS-IP – EU Horizon 2020 – InfraDev Programme – Project no. 676564; 2015-2019) included 47 partners plus 6 associate partners from 25 countries across Europe and several international organizations.

The EPOS Data Portal provides access to data and data products from ten different geoscientific areas: Seismology, Near Fault Observatories, GNSS Data and Products, Volcano Observations, Satellite Data, Geomagnetic Observations, Anthropogenic Hazards, Geological Information and Modelling, Multi-scale laboratories and Tsunami Research. The Data Portal Graphic User Interface (GUI) provides search functionalities that enable users to filter data by several criteria (e.g. spatio-temporal extents, keywords, data/service providers, free text); it also enables users to pre-visualize data in map, tabular or graph formats. Finally, the GUI provides details about the selected data (e.g., name, description, license, DOI) and allows users to further refine the search in order to reach a finer level of granularity of the data.
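The kind of filtered search the GUI issues can be sketched as follows (the endpoint and parameter names are invented for illustration and are not the actual EPOS API):

```python
from urllib.parse import urlencode

# Hypothetical illustration of building a filtered search request with the
# criteria named above (spatio-temporal extent, keywords, free text).
def build_search_url(base, bbox=None, start=None, end=None, keywords=None, q=None):
    params = {}
    if bbox:
        params["bbox"] = ",".join(map(str, bbox))      # minLon,minLat,maxLon,maxLat
    if start:
        params["starttime"] = start
    if end:
        params["endtime"] = end
    if keywords:
        params["keywords"] = ",".join(keywords)
    if q:
        params["q"] = q
    return f"{base}?{urlencode(params)}"

url = build_search_url("https://example.org/epos/search",
                       bbox=(5.0, 45.0, 15.0, 48.0),
                       start="2021-01-01", end="2021-12-31",
                       keywords=["seismology"], q="waveform")
```

Each GUI filter maps onto one query parameter, which is how the portal can translate interactive refinement into progressively narrower requests.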

The presentation shows the achievements of the EPOS community, with a focus on the EPOS Data Portal, which provides information about the datasets available from the Thematic Core Services (TCS) and access to them. We demonstrate not only the features of the graphical user interface but also the underlying architecture of the whole system.

How to cite: Bailo, D., Michalek, J., Jeffery, K. G., Atakan, K., and Paciello, R. and the EPOS IT Team: EPOS Data portal for cross-disciplinary data access in the Solid Earth Domain, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4265, https://doi.org/10.5194/egusphere-egu22-4265, 2022.

EGU22-5407 | Presentations | ESSI2.3

Real-time Delivery of Sensor Data Streams using IoT and OGC Standards 

Simon Jirka, Christian Autermann, and Sebastian Drost

In the past, many projects have evaluated and demonstrated the use of the Sensor Web Enablement (SWE) standards of the Open Geospatial Consortium (OGC) for publishing sensor data. The advantages of these standards include a domain-independent approach for ensuring the interoperability of interfaces, data, and metadata. However, in most cases, the developed infrastructures were limited to pull-based data retrieval patterns: data consumers regularly query servers for data updates, which may result in high server loads due to high-frequency update requests, or in increased latencies until a consumer receives new sensor data.

Although there were relevant specifications, such as the OGC Publish/Subscribe standard, as well as discussion papers, the OGC SWE framework never included a widely accepted solution for the active, push-based delivery of observation data. With the adoption of the OGC SensorThings API standard in conjunction with mainstream Internet of Things protocols such as the Message Queuing Telemetry Transport (MQTT) protocol, this has changed in recent years.
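The shift from pull to push can be sketched with a minimal in-process stand-in for an MQTT broker (illustrative only, not an actual MQTT client or the project's software; the topic string merely mimics SensorThings API MQTT conventions): consumers subscribe once and are notified of every new observation, instead of polling the server.

```python
from collections import defaultdict

class MiniBroker:
    """Toy in-process stand-in for an MQTT broker: subscribers registered on
    a topic are pushed every published observation immediately."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, payload):
        for cb in self._subs[topic]:
            cb(payload)

broker = MiniBroker()
received = []
# Topic modelled on SensorThings API MQTT conventions (illustrative)
broker.subscribe("v1.1/Datastreams(1)/Observations", received.append)

broker.publish("v1.1/Datastreams(1)/Observations",
               {"result": 7.2, "phenomenonTime": "2022-05-23T10:00:00Z"})
broker.publish("v1.1/Datastreams(1)/Observations",
               {"result": 7.4, "phenomenonTime": "2022-05-23T10:01:00Z"})
```

The consumer receives both observations without ever issuing a request, which is precisely what removes the polling load and latency described above.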

At EGU 2020 we presented an approach for using these technologies to enable the efficient collection of sensor observation data in hydrological applications by bridging between sensors and data management servers (Drost et al., 2020).

As part of this contribution, we will discuss the applicability of these technologies, OGC SensorThings API as well as MQTT, to also cover the delivery of data to consumers in addition to the previously described data transmission from sensor devices to a data sink. We will put special emphasis on experiences gathered from the deployment in marine environments (e.g., live underway data and event metadata streams of research vessels), as part of the EMODnet Ingestion II project. Special consideration will be given to a discussion of potential advantages of push-based communication patterns as well as identified challenges for future work (e.g., metadata about push-based data streams, standardization of payloads, access control, best practices on how to structure provided data streams).

Furthermore, we will address the development of data visualization tools for such interoperable real-time data streams and will discuss the opportunities to transfer these technologies to further application domains such as hydrology.

References

Drost, S., Speckamp, J., Hollmann, C., Malewski, C., Rieke, J., & Jirka, S. (2020). Internet of Things Technologies for the Efficient Collection of Hydrological Measurement Data. EGU General Assembly 2020, Online. https://doi.org/10.5194/egusphere-egu2020-10452

How to cite: Jirka, S., Autermann, C., and Drost, S.: Real-time Delivery of Sensor Data Streams using IoT and OGC Standards, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5407, https://doi.org/10.5194/egusphere-egu22-5407, 2022.

EGU22-5478 | Presentations | ESSI2.3

EPOS-Norway Portal 

Jan Michalek, Kuvvet Atakan, Christian Rønnevik, Sara Kverme, Lars Ottemøller, Øyvind Natvik, Tor Langeland, Ove Daae Lampe, Gro Fonnes, Jeremy Cook, Jon Magnus Christensen, Ulf Baadshaug, Halfdan Pascal Kierulf, Bjørn-Ove Grøtan, Odleiv Olesen, John Dehls, and Valerie Maupin

The European Plate Observing System (EPOS) is a European project for building a pan-European infrastructure for accessing solid Earth science data, now governed by EPOS ERIC (European Research Infrastructure Consortium). The EPOS-Norway project (EPOS-N; RCN-Infrastructure Programme - Project no. 245763) is a Norwegian project funded by the Research Council of Norway. The aim of the Norwegian EPOS e‑infrastructure is to integrate data from the seismological and geodetic networks, as well as data from the geological and geophysical data repositories. Among the six EPOS-N project partners, four institutions provide data: the University of Bergen (UIB), the Norwegian Mapping Authority (NMA), the Geological Survey of Norway (NGU) and NORSAR.

In this contribution, we present the EPOS-Norway Portal as an online, open-access, interactive tool allowing visual analysis of multidimensional data. It supports maps and 2D plots with linked visualizations. Currently, access is provided to more than 300 datasets (18 web services, 288 map layers and 14 static datasets) from four subdomains of Earth science in Norway. New datasets are planned to be integrated in the future. The EPOS-N Portal can access remote datasets via web services such as FDSNWS for seismological data and OGC services (e.g. WMS) for geological and geophysical data. Standalone datasets are available through preloaded data files. Users can also simply add another WMS server or upload their own dataset for visualization and comparison with other datasets. The portal provides a unique way (the first of its kind in Norway) to explore various geoscientific datasets in one common interface. One of its key aspects is the quick, simultaneous visual inspection of data from various disciplines and the testing of scientific or geohazard-related hypotheses. One such example is the spatio-temporal correlation of earthquakes (1980 until now) with existing critical infrastructure (e.g. pipelines), geological structures, submarine landslides or unstable slopes.

The EPOS-N Portal is implemented by adapting Enlighten-web, a server-client program developed by NORCE. Enlighten-web facilitates interactive visual analysis of large multidimensional data sets, and supports interactive mapping of millions of points. The Enlighten-web client runs inside a web browser. An important element in the Enlighten-web functionality is brushing and linking, which is useful for exploring complex data sets to discover correlations and interesting properties hidden in the data. The views are linked to each other, so that highlighting a subset in one view automatically leads to the corresponding subsets being highlighted in all other linked views.
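Brushing and linking can be sketched in a few lines (class and method names here are invented for illustration, not Enlighten-web's API): a selection brushed in any one view is propagated to every linked view, so the same record subset is highlighted everywhere.

```python
class View:
    """A single visualization (e.g. a map or a time-series plot)."""
    def __init__(self, name):
        self.name = name
        self.highlighted = set()

class LinkedViews:
    """Links several views so a brush in one updates them all."""
    def __init__(self, *views):
        self.views = views

    def brush(self, indices):
        # Highlight the same record subset in every linked view
        for v in self.views:
            v.highlighted = set(indices)

map_view, plot_view = View("map"), View("timeseries")
dashboard = LinkedViews(map_view, plot_view)
dashboard.brush({3, 7, 9})   # e.g. user selects three earthquakes on the map
```

Selecting records on the map immediately highlights the same records in the time-series view, which is what makes hidden correlations between datasets easy to spot.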

How to cite: Michalek, J., Atakan, K., Rønnevik, C., Kverme, S., Ottemøller, L., Natvik, Ø., Langeland, T., Lampe, O. D., Fonnes, G., Cook, J., Christensen, J. M., Baadshaug, U., Kierulf, H. P., Grøtan, B.-O., Olesen, O., Dehls, J., and Maupin, V.: EPOS-Norway Portal, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5478, https://doi.org/10.5194/egusphere-egu22-5478, 2022.

EGU22-5931 | Presentations | ESSI2.3

C-SCALE: A new Data and Compute Federation for Earth Observation 

Christian Briese, Charis Chatzikyriakou, Diego Scardaci, Zdeněk Šustr, Enol Fernández, Björn Backeberg, and Elonora Testa

Through the provision of massive streams of high-resolution Earth Observation (EO) data, the EU Copernicus programme has established itself globally as the predominant spatial data provider. These data are widely used by research communities to monitor and address global challenges, such as environmental monitoring and climate change, supporting European policy initiatives, such as the Green Deal and others. To date, there is no single European data sharing and processing infrastructure that serves all datasets of interest, and Europe is falling behind international developments in Big Data analytics and computing.

The C-SCALE (Copernicus - eoSC AnaLytics Engine, https://c-scale.eu) project federates European EO infrastructure services, such as ESA’s Sentinel Collaborative Ground Segment, the Copernicus DIASes (Data and Information Access Services under the EC), independent nationally-funded EO service providers, and European Open Science Cloud (EOSC) e-infrastructure providers. It capitalises on EOSC's capacity and capabilities to support Copernicus research and operations with large and easily accessible European computing environments. The project will implement and publish the C-SCALE Federation in the EOSC Portal as a suite of complementary services that can be easily exploited. It will consist of a Data Federation, a service providing access to a large EO data archive, a Compute Federation, and analytics tools.

The C-SCALE Data Federation aims at making EO data providers under EOSC findable, their metadata databases searchable, and their product storage accessible. While a centralised, monolithic, complete Copernicus data archive may not be feasible, some organisations maintain various archives for limited areas of their interest. C-SCALE, therefore, integrates these heterogeneous resources into a “system of systems” that will offer the users an interface that, in most cases, provides similar functionality and quality of service as a centralised, monolithic data archive would. The federation is built on existing technologies, avoiding redundancy and replication of functions and not disrupting existing usage patterns at participating sites, instead only adding a simple layer for improved discovery and seamless access.

At the same time, the C-SCALE Compute Federation provides access to a wide range of computing providers (IaaS VMs, container orchestration platforms, HPC and HTC systems) to enable the analysis of Copernicus and EO data under EOSC. The design of the federation allows users to deploy their applications using federated authentication mechanisms, find their software under a common catalogue, and have access to data using C-SCALE Data Federation tools. The federation relies on existing tools and services already compliant with EOSC, thus facilitating the integration into the larger EOSC ecosystem.

By making such scalable, federated Big Copernicus Data Analytics services available through EOSC and its Portal, and linking the problems and results with experience from other research disciplines, C-SCALE helps to support the development of the EO sector. By abstracting the set-up of computing and storage resources from the end-users, it enables the deployment of custom workflows to generate meaningful results quickly and easily. Furthermore, the project will deliver a blueprint setting up an interaction model between service providers to facilitate interoperability between commercial and public cloud infrastructures.

How to cite: Briese, C., Chatzikyriakou, C., Scardaci, D., Šustr, Z., Fernández, E., Backeberg, B., and Testa, E.: C-SCALE: A new Data and Compute Federation for Earth Observation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5931, https://doi.org/10.5194/egusphere-egu22-5931, 2022.

EGU22-6628 | Presentations | ESSI2.3

EPOS-GNSS – Current status of service implementation for European GNSS data and products 

Rui Fernandes, Carine Bruyninx, Paul Crocker, Anne Socquet, and Mathilde Vergnolle and the EPOS-GNSS Members

EPOS-GNSS is the Thematic Core Service implemented in the framework of the European Plate Observing System (EPOS) and focused on the management and dissemination of GNSS (Global Navigation Satellite Systems) data and products. The European Research Infrastructure Consortium (ERIC) has provided EPOS with a legal personality and capacity recognised in all EU Member States, permitting open access for researchers to a large pool of integrated Solid Earth science data, data products and facilities.

The GNSS community in Europe is benefiting from EPOS ERIC to create mechanisms and procedures that harmonize, in collaboration with other pan-European infrastructures (particularly EUREF), access to GNSS data, metadata and derived products (time series, velocities, and strain rate maps), which are primarily of interest to the Solid Earth community but ultimately benefit many other stakeholders, particularly data providers, as well as other scientific and technical applications.

In this presentation we focus on the three main components that entered the pre-operational phase last year: (a) Governance – ensuring that the entire community, from data providers to end-users, is represented and its efforts recognized; (b) GLASS – the in-house dedicated software package developed for the dissemination of GNSS data and products with rigorous quality control procedures; (c) Products – internally consistent GNSS solutions of dedicated products (time series, velocities and strain rates), created from the available data set using state-of-the-art methodologies and used to improve the understanding of the different Solid Earth mechanisms taking place in the European region.

How to cite: Fernandes, R., Bruyninx, C., Crocker, P., Socquet, A., and Vergnolle, M. and the EPOS-GNSS Members: EPOS-GNSS – Current status of service implementation for European GNSS data and products, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6628, https://doi.org/10.5194/egusphere-egu22-6628, 2022.

EGU22-8474 | Presentations | ESSI2.3

The construction of the eLTER Pan-European research infrastructure to support multidisciplinary environmental data integration and analysis 

John Watkins, Johannes Peterseil, Alessandro Oggioni, and Vladan Minic

One of the major goals of the upcoming European integrated Long-Term Ecosystem, critical zone and socio-ecological Research Infrastructure (eLTER RI) is to provide reliable and quality-controlled long-term environmental data from various disciplines for scientific analysis, as well as for the assessment of environmental policy impacts. For this purpose, eLTER has been designing and piloting a federated data infrastructure for the integration and dissemination of a broad range of in situ observations and related data.
Implementing such a pan-European environmental data infrastructure is a lengthy and complex process driven by user needs, stakeholder requirements and general service and technology best practices. The European LTER community has laid the foundations of this eLTER Information System. For further improvements, user needs have recently been collected by (a) targeted interviews with selected stakeholders to identify requirements, (b) workshops mapping requirements to potential RI services, and (c) analysis work for designing the RI service portfolio. The requirement collections are used to derive functional requirements (i.e. the behaviour of essential features of the system) and non-functional requirements (i.e. the general characteristics of the system) for the IT infrastructure and services. These collected requirements revolve around the development of workflows for the ingestion, curation and publication of data objects, including the creation, harvesting, discovery and visualisation of metadata, as well as providing means to support the analysis of these datasets and communicating study results.
Considering that downstream analyses of data from both eLTER and other RIs are a key part of the RI's scope, the design includes virtual collaborative environments where different data and analyses can be brought together and results shared, with the FAIR principles as the default for research practice. The eLTER RI will take advantage of data stored in existing partner data systems, harmonised by a central discovery portal and federated data access components providing a common information management infrastructure for bridging across environmental RIs.
This presentation will provide an overview of the current stage of the eLTER RI developments as well as its major components, provide an outlook for future developments and discuss the technical and scientific challenges of building the eLTER RI for interdisciplinary data sharing.

How to cite: Watkins, J., Peterseil, J., Oggioni, A., and Minic, V.: The construction of the eLTER Pan-European research infrastructure to support multidisciplinary environmental data integration and analysis, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8474, https://doi.org/10.5194/egusphere-egu22-8474, 2022.

EGU22-8537 | Presentations | ESSI2.3

INSITUDE: web-based collaborative platform for centralized oceanographic data reception, management, exploration, and analysis 

Dmitry Khvorostyanov, Victor Champonnois, Alain Laupin-Vinatier, Jacqueline Boutin, Gilles Reverdin, Nathalie Lefèvre, Antonio Lourenco, Alban Lazar, Jean-Benoit Charrassin, and Frédéric Vivier

The LOCEAN laboratory of the Pierre-Simon Laplace Institute (IPSL) is in charge of a number of scientific projects and measurement campaigns that result in a large flow of heterogeneous oceanographic data managed at LOCEAN. The data are of various origins and include in situ data from buoys, ships, moorings and marine mammals, as well as satellite missions for salinity, altimetry, ocean color and temperature. LOCEAN also has an instrumental development team that designs and deploys buoys in various parts of the global ocean, with a need to receive and track the data in near-real time. The data PIs can be involved in different research groups and projects, and while focusing on providing their data, they may need to collaborate with other teams providing complementary datasets.

To address these needs, the INSITUDE platform is being developed at LOCEAN with the following goals in mind: (1) receive, manage, track in near-real time, and explore diverse data; (2) assist scientific experts in data quality control; (3) facilitate cross-uses of in situ and satellite data available at LOCEAN.

The software consists of four components: (1) a Django application for meta-data management; (2) data processing software (Python); (3) a Flask application for server-side interactions with the database; (4) an interactive data exploration/validation front-end.

The basic workflow involves the following steps:

(1) The user specifies the relevant meta-data using the web interface of the Django application; the meta-data database is thus updated;

(2) The processing core is launched automatically at regular intervals during the day: it reads the meta-data from the database, queries the mailboxes and/or external web services for the requested data, then receives, decodes and processes the data and fills the measurements database. It also generates ASCII data files for selected datasets, which can be downloaded via dedicated web pages or used for processing with external user programs (e.g. Matlab or Python scripts);

(3) The data stored in the measurements database can be interactively explored using DataViewer applications, allowing zoomable views of time series, vertical profiles, and trajectories shown on a virtual globe. Data from different campaigns and for different variables can be viewed together. The quality control assistant allows experts to seamlessly validate the data by assigning quality flags to selected data points or regions, optionally after computing relevant statistics. The validated data can then be visualized and saved based on the desired quality flag values.
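The flag-based validation in step (3) can be sketched as follows (a simplified illustration; the flag values, function names and thresholds are assumptions, not LOCEAN's actual scheme): an expert selects suspect points, a flag is assigned to them, and downstream use filters on the flags.

```python
# Flag values loosely follow common oceanographic practice (1 = good, 4 = bad);
# this toy example stands in for the interactive quality-control assistant.
GOOD, BAD = 1, 4

def flag_selection(values, selected_idx, flag):
    """Assign a quality flag to the data points the expert selected."""
    flags = [GOOD] * len(values)
    for i in selected_idx:
        flags[i] = flag
    return flags

salinity = [35.1, 35.2, 99.9, 35.0]      # 99.9 is an obvious spike
flags = flag_selection(salinity, [2], BAD)
good_values = [v for v, f in zip(salinity, flags) if f == GOOD]
```

Saving or plotting only the points whose flag is "good" is what makes the validated dataset directly reusable by other teams.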

The INSITUDE platform facilitates data sharing across multiple teams and collaboration between data providers and data experts, researchers and engineers, enabling research projects focused on the cross-exploration of various datasets, studies of processes involving both in situ and satellite data, and the interpretation of in situ data in a larger-scale context owing to the satellite data. The system offers centralized, intuitive acquisition control and access to the received data, along with the related meta-data (projects, campaigns, buoys, people, etc.), and facilitates data quality control and validation.

The INSITUDE platform is currently used at LOCEAN and can be deployed in data centers of national data infrastructures, such as the French ODATIS/DATA TERRA.

How to cite: Khvorostyanov, D., Champonnois, V., Laupin-Vinatier, A., Boutin, J., Reverdin, G., Lefèvre, N., Lourenco, A., Lazar, A., Charrassin, J.-B., and Vivier, F.: INSITUDE: web-based collaborative platform for centralized oceanographic data reception, management, exploration, and analysis, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8537, https://doi.org/10.5194/egusphere-egu22-8537, 2022.

EGU22-8862 | Presentations | ESSI2.3

ENVRI-Hub, the open-access platform of the environmental sciences community in Europe: a closer look into the architecture 

Ana Rita Gomes, Angeliki Adamaki, Alex Vermeulen, Ulrich Bundke, and Andreas Petzold

The ENVRI-FAIR project brings together the ESFRI environmental research infrastructures (ENVRI) that provide environmental data and services, with the aim of making their resources compliant with the FAIR principles. To achieve this goal, the required work is mostly technical, with the ENVRIs working not only towards improving the FAIRness of their own data and services, but also reflecting their efforts at a higher level by becoming FAIR as a cluster. The approach to this task cannot be linear, as it requires the harmonization of efforts along different dimensions. To build on common ground, the most crucial technical gaps have been prioritized, and the ENVRIs identify common requirements and design patterns and collaborate on making good use of existing technical solutions that improve their FAIRness.

One of the highest-ranked priorities, and among the biggest challenges, is the design of a machine-actionable ENVRI Catalogue of Services that also supports integration into the EOSC. Through this catalogue, service providers will be able to make their assets findable and accessible by mapping their resources onto common and rich metadata standards, while a web application enables human interaction with the FAIR services. The design of this application, named the ENVRI-Hub, is discussed here. Other aspects related to the ENVRI services, e.g. the use of PIDs, the use of relevant vocabularies, the tracking of license information and provenance, etc., are also investigated.

 

As a web application, the ENVRI-Hub can act as an integrator by bringing together existing ENVRI services and interoperable services across research infrastructure boundaries. Exploring the potential of the ENVRI-Hub from the design phase onwards, the ingestion of metadata from ENVRI assets such as the ENVRI Knowledge Base, the ENVRI Catalogue of Services and the ENVRI Training Catalogue is investigated, aiming to provide users with functionalities relevant to, e.g., the discovery of environmental observations, services, tutorials and other available resources. The chosen architectural pattern for the ENVRI-Hub can be compared to a classical n-tier architecture, comprising 1) a data tier, 2) a logic tier and 3) a presentation tier. To integrate the different ENVRI platforms while preserving the application’s independence, the ENVRI-Hub demonstrator aims to replicate an instance of the Knowledge Base and the Catalogue of Services. Following a centralised architectural approach, the ENVRI-Hub serves as a harvester, collecting data and metadata from the ENVRI Knowledge Base and the ENVRI Catalogue of Services and thereby bringing these ENVRI platforms together in a single portal.
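The harvester pattern of the data tier can be sketched in a few lines; the source names, fetch functions and record shapes below are hypothetical stand-ins, not the actual ENVRI-Hub APIs:

```python
# Minimal sketch of a centralised harvester: the data tier collects metadata
# records from several catalogue sources into one local index that the logic
# and presentation tiers then serve from a single portal.
def harvest(sources):
    index = {}
    for name, fetch in sources.items():
        for record in fetch():               # each source yields metadata records
            index[record["id"]] = {**record, "source": name}
    return index

# Hypothetical stand-ins for the Knowledge Base and Catalogue of Services.
kb = lambda: [{"id": "kb-1", "title": "Atmospheric observation guide"}]
cos = lambda: [{"id": "svc-1", "title": "Data discovery API"}]

index = harvest({"knowledge_base": kb, "catalogue_of_services": cos})
assert set(index) == {"kb-1", "svc-1"}
```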

 

Acknowledgement: ENVRI-FAIR has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 824068.

This work is only possible with the collaboration of the ENVRI-FAIR partners and thanks to the joint efforts of the whole ENVRI-Hub team.

How to cite: Gomes, A. R., Adamaki, A., Vermeulen, A., Bundke, U., and Petzold, A.: ENVRI-Hub, the open-access platform of the environmental sciences community in Europe: a closer look into the architecture, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8862, https://doi.org/10.5194/egusphere-egu22-8862, 2022.

EGU22-8905 | Presentations | ESSI2.3

Developing a Next Generation Platform for Geodetic, Seismological and Other Geophysical Data Sets and Services 

Chad Trabant, Henry Berglund, Jerry Carter, and David Mencin

The Data Services of IRIS and the Geodetic Data Services of UNAVCO have been supporting the seismological and geodetic research communities for many decades.  Historically, these two facilities have independently managed data repositories on self-managed systems.  As part of merger activities between IRIS and UNAVCO, we have established a project to design, develop and implement a common, cloud-based platform.  Goals of this project include operational improvements such as cost-effectiveness, robustness, on-demand scalability, significant growth potential and increased adaptability for new data types.  While we expect a number of operational improvements, we anticipate a number of additional benefits for the research communities we serve.

The new platform will provide services for data queries across the internal repositories, giving researchers an easier path to discovery and access to integrable sets of related geophysical data.

Researchers will be able to conduct their data processing in the same, or a data-proximate, cloud as the platform, taking advantage of the copious and affordable computation offered by such environments.  Following the paradigm of moving computation to the data, this avoids the time- and resource-consuming need to transfer data over the internet.  Furthermore, the adoption of cloud-optimized data containers and direct access by researchers will support efficient processing.  In cases where transferring large volumes of data is still necessary, the large capacity of cloud storage systems will allow enhanced transfer mechanisms such as Globus, which we will be exploring.

For many users a transition of the data repositories to a new environment will be nearly seamless.  This will be made possible by implementing many of the same services already supported by the current facilities, such as the suite of FDSN web services.  The project is currently in a prototyping stage, and we anticipate having a complete design by the end of 2022.  We will report on the status of the project, anticipated directions and challenges identified so far.

How to cite: Trabant, C., Berglund, H., Carter, J., and Mencin, D.: Developing a Next Generation Platform for Geodetic, Seismological and Other Geophysical Data Sets and Services, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8905, https://doi.org/10.5194/egusphere-egu22-8905, 2022.

EGU22-9421 | Presentations | ESSI2.3

Breaking the barriers to interdisciplinarity: Contributions from the Environmental Research Infrastructures 

Angeliki K. Adamaki, Ana Rita Gomes, Alex Vermeulen, Ari Asmi, and Andreas Petzold

As science and technology evolve, interdisciplinary targets are anything but static, introducing additional levels of complexity and challenging further the initiatives to break the barriers to interdisciplinary research. For over a decade the community of the Environmental Research Infrastructures, forming the ENVRI cluster, has been building strong foundations to overcome these challenges and benefit the environmental sciences. One of the overarching goals of the ENVRI cluster is to provide more FAIR (Findable, Accessible, Interoperable and Reusable) data and services which will be open to everyone who wishes to get access to environmental observations, from scientists and research communities of scientifically diverse clusters to curious citizens, data scientists and policy makers.

Starting with domain-specific use cases, we further explore potential cross-domain cases, e.g. in the form of environmental science stories crossing disciplinary boundaries. Jupyter Notebooks developed by the contributing Research Infrastructures (and accessible from a hub of services called the ENVRI-Hub) are promising tools to demonstrate and validate the capabilities of service provision among ENVRIs and across Science Clusters, and act as examples of what a user can achieve through the ENVRI-Hub. In one of the examples we investigate, a user-friendly, well-structured Jupyter Notebook uses the research infrastructures’ application programming interfaces (APIs) to plot on a single map the geographical locations of several Marine and Atmospheric stations (where a station in this example is defined as a measurement point actively collecting data). The FAIR principles provide a firm foundation for the layer that supports the ENVRI-Hub structure, and the preliminary results are promising. Considering that the APIs can become discoverable via a common ENVRI catalogue, the ENVRI-Hub aims to make full use of the machine-actionability of such a catalogue in the future to facilitate the execution of this kind of use case in the Hub itself.

Acknowledgement: ENVRI-FAIR has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 824068. This work is only possible with the collaboration of the ENVRI-FAIR partners and thanks to the joint efforts of the whole ENVRI team.

How to cite: Adamaki, A. K., Gomes, A. R., Vermeulen, A., Asmi, A., and Petzold, A.: Breaking the barriers to interdisciplinarity: Contributions from the Environmental Research Infrastructures, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9421, https://doi.org/10.5194/egusphere-egu22-9421, 2022.

EGU22-10071 | Presentations | ESSI2.3

Facilitating Multi-Disciplinary Research via Integrated Access to the Seismological Data & Product Services of EPOS Seismology 

Florian Haslinger, Lars Ottemöller, Carlo Cauzzi, Susana Custodio, Rémy Bossu, Alberto Michelini, Fabrice Cotton, Helen Crowley, Laurentiu Danciu, Irene Molinari, and Stefano Parolai

The European Plate Observing System EPOS is the single coordinated framework for solid Earth science data, products and services on a European level. As one of the science domain structures within EPOS, EPOS Seismology brings together the three large European infrastructures in seismology: ORFEUS for seismic waveform data & related products, EMSC for parametric earthquake information, and EFEHR for seismic hazard and risk information. Across these three pillars, EPOS Seismology provides services to store, discover and access seismological data and products from raw waveforms to elaborated hazard and risk assessment.

ORFEUS, EMSC and EFEHR are community initiatives / infrastructures that each have their own history, structure, membership, governance and established mode of work (including data sharing and distribution practices), developed in parts over decades. While many institutions and individuals are engaged in more than one of these initiatives, overall the active membership is quite distinct. Also, each of the initiatives has different connections to and interactions with other international organisations. Common to all is the adoption and promotion of recognized international standards for data, products and services originating from wider community organisations (e.g. FDSN, IASPEI, GEM), and the active participation in developing those further or creating new ones together with the community.     

In this presentation we will briefly review the history and development of the three initiatives and discuss how we set up EPOS Seismology as a joint coordination framework within EPOS. We will highlight issues encountered on the way and those that we are still trying to solve in our attempt to create and operate a coordinated research infrastructure that appropriately serves the needs of today’s scientific community. Among those issues is also the ‘timeliness’ of data and products: while a number of services offer almost-real-time access to newly available information at least in theory, this comes with various downstream implications that are currently actively discussed. We also cover the envisaged role of EPOS Seismology in supporting international multi-disciplinary activities that require and benefit from harmonized, open, and interoperable data, products, services and facilities from the waveform, catalogue and hazard / risk domains of seismology.

How to cite: Haslinger, F., Ottemöller, L., Cauzzi, C., Custodio, S., Bossu, R., Michelini, A., Cotton, F., Crowley, H., Danciu, L., Molinari, I., and Parolai, S.: Facilitating Multi-Disciplinary Research via Integrated Access to the Seismological Data & Product Services of EPOS Seismology, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10071, https://doi.org/10.5194/egusphere-egu22-10071, 2022.

EGU22-10897 | Presentations | ESSI2.3

Progressing the global samples community through the new partnership between IGSN and DataCite 

Sarah Ramdeen, Kerstin Lehnert, Jens Klump, Matt Buys, Sarala Wimalaratne, and Lesley Wyborn

In October 2021, DataCite and the IGSN e.V. signed an agreement to form a partnership to support the global adoption, implementation, and use of physical sample identifiers. Both DataCite and IGSN currently offer the ability to provide Globally Unique Persistent, Resolvable Identifiers (GUPRIs) within the overall research ecosystem, and the proposed collaboration will bring together the strengths of each organization.

DataCite is a community-led organisation that has been providing the means to create, find, cite, connect, and use research across 47 countries globally since 2009. DataCite provides persistent identifiers (DOIs) for research data and other research outputs, and supports the efforts of several identifier communities. DataCite also develops services that make it easier for researchers to connect and share their DOIs with the broader research ecosystem.

IGSN e.V. is an international, non-profit organization with more than 20 members and has a narrower focus than DataCite. The core purpose of IGSN is to enable transparent and traceable connections between samples, instruments, grants, data, publications, people and organizations. Since 2011, IGSN has provided a central registration system that enables researchers to assign a globally unique and persistent identifier to physical samples.

The proposed partnership will enable IGSN to leverage DataCite DOI registration while allowing IGSN to focus on community efforts such as promoting and expanding the global samples ecosystem and supporting new research and best practice in methods of identifying, citing, and locating physical samples. DataCite will provide the IGSN ID registration services and support to ensure the ongoing sustainability of the IGSN PID infrastructure and its integration with the global PID ecosystem.

This partnership is an opportunity for IGSN to reenvision its governance and community engagement and to reassess how the IGSN can best serve the community in today’s open science ecosystem. This talk will focus on the developing changes to the IGSN governance and community efforts.

Different research communities have a wide range of requirements towards metadata and identification of samples. The IGSN plans to develop an international ‘Community of Communities’ which will include members from the global samples community across multiple disciplines. The community will support varying levels of skill with PIDs and metadata, and will build cohesion around the use of IGSN, enabling greater research discovery, innovation and advancement for samples.

The IGSN Samples Community (IGSN SC) aspires to be a collaborative space for community development that promotes the use of samples and their connections to any derived observations, images, and analytical data.

How to cite: Ramdeen, S., Lehnert, K., Klump, J., Buys, M., Wimalaratne, S., and Wyborn, L.: Progressing the global samples community through the new partnership between IGSN and DataCite, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10897, https://doi.org/10.5194/egusphere-egu22-10897, 2022.

EGU22-11968 | Presentations | ESSI2.3

Proposed metadata standards for FAIR access to GNSS data 

Anna Miglio, Andras Fabian, Carine Bruyninx, Stefanie De Bodt, Juliette Legrand, Paula Oset Garcia, and Inge Van Nieuwerburgh

Accurate positioning for activities such as navigation, mapping, and surveying rely on permanent stations located all over the world and continuously tracking Global Navigation Satellite Systems (GNSS, such as Galileo, GPS, GLONASS). 
The Royal Observatory of Belgium maintains repositories containing decades of observation data from hundreds of GNSS stations belonging to Belgian and European networks (e.g., the EUREF public repository). 
However, current procedures for accessing GNSS data do not adequately serve user needs. For example, in the case of the EUREF repository, despite the fact that its GNSS data originate from a significant number of data providers and could be handled in different ways, provenance information is lacking and data licenses are not always available.
In order to respond to user demands, GNSS data and the associated metadata need to be standardised, discoverable and interoperable i.e., made FAIR (Findable, Accessible, Interoperable, and Re-usable). Indeed, FAIR data principles serve as guidelines for making scientific data suitable for reuse, by both people and machines, under clearly defined conditions. 
We propose to identify existing metadata standards that cover the needs of the GNSS community to the maximum extent and to extend them and/or to develop an application profile, considering also best practices at other GNSS data repositories. 

Here we present two proposals for metadata to be provided to the users when querying and/or downloading GNSS data from GNSS data repositories. 
We first consider metadata containing station-specific information (e.g., station owner, GNSS equipment) and propose an extension of GeodesyML, an XML implementation of the eGeodesy model aligned with international standards such as ISO19115-1:2014 and OGC's GML. The proposed extension contains additional classes and properties from domain specific vocabularies when necessary, and includes extra metadata such as data license, file provenance information, etc. to comply with FAIR data principles. All proposed changes to GeodesyML are optional and therefore guarantee full backwards compatibility. 

Secondly, we consider metadata related to GNSS observation data, i.e., RINEX data files. We propose an application profile based on the specifications of the Data Catalog Vocabulary (DCAT), an RDF vocabulary that, by design, facilitates interoperability between data portals (supporting DCAT-based RDF documents) and enables publishing metadata directly on the web in different formats.
In particular, our proposal (GNSS-DCAT-AP) includes new recommended metadata classes to describe the specific characteristics of GNSS observation data: the type of RINEX file (e.g., compression format, frequency); the RINEX file header and information regarding the GNSS station, including the GNSS antenna and receiver; and the software used to generate the RINEX file. Additional optional classes allow the inclusion of information regarding the GNSS antenna, receiver and monument associated with the GNSS station, extracted from the IGS site log or GeodesyML files.
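A DCAT-based record of the kind proposed here can be illustrated as a small JSON-LD document. The sketch below uses standard DCAT and Dublin Core terms; the GNSS-specific values (station name, file name, URL) are made up for illustration and do not reproduce the actual GNSS-DCAT-AP classes:

```python
import json

# Hypothetical DCAT-style description of a daily RINEX observation file,
# serialized as JSON-LD. Property names come from the DCAT and Dublin Core
# vocabularies; all values are illustrative placeholders.
record = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "dcat:Dataset",
    "dct:title": "Daily RINEX observation file, station BRUX (illustrative)",
    "dct:license": "https://creativecommons.org/licenses/by/4.0/",
    "dcat:distribution": {
        "@type": "dcat:Distribution",
        "dcat:compressFormat": "application/gzip",
        "dcat:downloadURL": "https://example.org/brux0010.22o.gz",
    },
}

print(json.dumps(record, indent=2))
```

Publishing such records on the web is what makes the RINEX holdings findable by generic data portals that understand DCAT.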

How to cite: Miglio, A., Fabian, A., Bruyninx, C., De Bodt, S., Legrand, J., Oset Garcia, P., and Van Nieuwerburgh, I.: Proposed metadata standards for FAIR access to GNSS data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11968, https://doi.org/10.5194/egusphere-egu22-11968, 2022.

EGU22-946 | Presentations | ESSI2.7

Lossy Scientific Data Compression With SPERR 

Samuel Li and John Clyne

Much of the research in lossy data compression has focused on minimizing the average error for a given storage budget. For scientific applications, the maximum point-wise error is often of greater interest than the average error. This paper introduces an algorithm that encodes outliers—data points exceeding a specified point-wise error tolerance—produced by a lossy compression algorithm optimized for minimizing average error. These outliers can then be corrected to be within the error tolerance when decoding. We pair this outlier coding algorithm with an in-house implementation of SPECK, a lossy compression algorithm based on wavelets that exhibits excellent rate-distortion performance (where distortion is measured by the average error), and introduce a new lossy compression product that we call SPERR. Compared to two leading scientific data compressors, SPERR uses less storage to guarantee an error bound and produces better overall rate-distortion curves at a moderate cost of added computation. Finally, SPERR facilitates interactive data exploration by exploiting the multiresolution properties of wavelets and their ability to reconstruct coarsened data volumes on the fly.
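The outlier-coding idea can be sketched in a few lines of numpy. Here a crude float16 cast stands in for the wavelet coder, so this illustrates only the concept of correcting points that exceed the tolerance, not the SPECK/SPERR algorithms themselves:

```python
import numpy as np

# Conceptual sketch of outlier coding: compress lossily, record corrections
# for every point whose error exceeds the tolerance, and apply them on
# decode so the point-wise error bound is guaranteed.
rng = np.random.default_rng(0)
data = rng.normal(size=1000).astype(np.float32)
tol = np.float32(5e-4)

lossy = data.astype(np.float16).astype(np.float32)   # stand-in lossy coder
err = data - lossy
outliers = np.nonzero(np.abs(err) > tol)[0]          # indices beyond tolerance
corrections = err[outliers]                          # extra bits to encode

decoded = lossy.copy()
decoded[outliers] += corrections                     # exact at outlier points
assert np.max(np.abs(data - decoded)) <= tol         # point-wise bound holds
```

The storage cost of the corrections grows with the number of outliers, which is why pairing this scheme with a coder already good at minimizing average error keeps the overhead small.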

How to cite: Li, S. and Clyne, J.: Lossy Scientific Data Compression With SPERR, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-946, https://doi.org/10.5194/egusphere-egu22-946, 2022.

EGU22-1149 | Presentations | ESSI2.7

Practical notes on lossy compression of scientific data 

Rostislav Kouznetsov

Lossy compression methods are extremely efficient in terms of space and performance and allow for a reduction of the network bandwidth and disk space needed to store data arrays without sacrificing the number of stored values.  Lossy compression involves an irreversible transformation of data that reduces its information content.  The transformation introduces a distortion that is normally measured in terms of absolute or relative error; the error is higher for higher compression ratios.  A good choice of lossy compression parameters maximizes the compression ratio while keeping the introduced error within acceptable margins.  Negligence, or failure to choose the right compression method or its parameters, leads to a poor compression ratio or to loss of data.

A good strategy for lossy compression involves specifying the acceptable error margin and choosing the compression parameters and storage format. We will discuss specific techniques of lossy compression, and illustrate pitfalls in the choice of error margins and of tools for lossy/lossless compression. The following specific topics will be covered:

1. Packing of floating-point data to integers in NetCDF is sub-optimal in most cases, and for some quantities leads to severe errors.
2. Keeping relative vs. absolute precision: a false alternative.
3. The acceptable error margin depends on both the origin and the intended application of the data.
4. Smart algorithms to decide on compression parameters have a limited area of applicability, which has to be considered in each individual case.
5. Choice of a format for compressed data (NetCDF, GRIB2, Zarr): tradeoff between size, speed and precision.
6. What "number_of_significant_digits" and "least_significant_digit" mean in terms of relative/absolute error.
7. Bit-Shuffle is not always beneficial.
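Point 1 can be illustrated with a minimal numpy sketch of conventional NetCDF-style scale/offset packing; the example field is made up:

```python
import numpy as np

# Scale/offset packing into 16-bit integers bounds the *absolute* error by
# half the quantization step, so values much smaller than the data range
# are destroyed -- a severe *relative* error for, e.g., concentration fields.
data = np.array([0.0, 5e-4, 10.0, 100.0])     # illustrative field
lo, hi = data.min(), data.max()
scale = (hi - lo) / (2**16 - 1)               # quantization step, ~1.5e-3 here

packed = np.round((data - lo) / scale).astype(np.uint16)
unpacked = packed * scale + lo

assert np.max(np.abs(unpacked - data)) <= scale / 2 + 1e-9  # absolute bound holds
assert unpacked[1] == 0.0      # ...but the small value is rounded away entirely
```

This is the false alternative of point 2 in miniature: packing fixes the absolute error, while many geophysical quantities spanning orders of magnitude need a bounded relative error instead.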

How to cite: Kouznetsov, R.: Practical notes on lossy compression of scientific data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1149, https://doi.org/10.5194/egusphere-egu22-1149, 2022.

EGU22-2431 | Presentations | ESSI2.7

Exploiting GPU capability in the fully spectral magnetohydrodynamics code QuICC 

Dmitrii Tolmachev, Andrew Jackson, Philippe Marti, and Giacomo Castiglioni

QuICC is a code designed to solve the equations of magnetohydrodynamics in a full sphere and other geometries. The aim is to provide understanding of the dynamo process that sustains planetary magnetic fields for billions of years by thermally-driven convective motion of an electrically-conducting fluid, and to provide the first clues as to how and why the magnetic fields can undergo reversals. The code must solve the coupled equations of conservation of momentum (the Navier-Stokes equation), Maxwell's equations of electrodynamics and the equation of heat transfer. For accuracy and to facilitate the imposition of boundary conditions, a fully spectral method is used in which angular variables in a spherical polar coordinate system are expanded in spherical harmonics, and radial variables are expanded in a special polynomial expansion in Jones-Worland polynomials. As a result, the coordinate singularities at the north and south poles and at the origin disappear. The code is designed to run on upward of 10^4 processors using MPI and shows excellent scaling. At the heart of the method is the ability to move between physical and spectral space by a variety of exact transforms: these involve the well-known Fast Fourier Transform (FFT) as well as the Legendre and Jones-Worland transforms.

In this talk we will focus on the latest advancements in the field of fast GPU algorithms for these types of discrete transforms. We present an extension to the publicly released VkFFT library (a GPU Fast Fourier Transform library for Vulkan, CUDA, HIP and OpenCL) that allows the calculation of Discrete Cosine Transforms of types I-IV. This is a very exciting addition to what VkFFT can do, as DCTs are often used in image processing, data compression and numerous other scientific tasks. So far, this is the first publicly available optimized GPU implementation of DCTs. We also present our progress in creating efficient Spherical Harmonic transforms (SHTs) and radial transforms using GPU implementations. This talk will present Jones-Worland and Associated Legendre Polynomial Transforms for modern GPU architectures, implemented based on the VkFFT runtime kernel optimization model. Combined, they can be used to create a new era of full-sphere models for planetary simulations in geophysics.
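The connection between DCTs and FFTs that FFT libraries exploit when adding DCT support can be sketched in numpy. This is a textbook reduction (embedding the N samples in a length-4N FFT), not VkFFT code:

```python
import numpy as np

def dct2_via_fft(x):
    """Unnormalized DCT-II computed with a standard FFT: place the N input
    samples at the odd indices of a length-4N array; the real part of the
    first N FFT bins is then sum_m x_m cos(pi*(2m+1)*k/(2N))."""
    n = len(x)
    y = np.zeros(4 * n)
    y[1:2 * n:2] = x                      # x_m placed at index 2m+1
    return np.fft.fft(y).real[:n]

def dct2_naive(x):
    """Direct O(N^2) evaluation of the same DCT-II definition, for checking."""
    n = len(x)
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    return (x * np.cos(np.pi * (2 * m + 1) * k / (2 * n))).sum(axis=1)

x = np.random.default_rng(1).normal(size=64)
assert np.allclose(dct2_via_fft(x), dct2_naive(x))
```

Optimized implementations avoid the 4x padding with clever reorderings and twiddle factors, but the underlying identity is the same.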

How to cite: Tolmachev, D., Jackson, A., Marti, P., and Castiglioni, G.: Exploiting GPU capability in the fully spectral magnetohydrodynamics code QuICC, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2431, https://doi.org/10.5194/egusphere-egu22-2431, 2022.

EGU22-2746 | Presentations | ESSI2.7

Scalable Offshore Wind Analysis With Pangeo 

Derek O'Callaghan and Sheila McBreen

The expansion of renewable energy portfolios to utilise offshore wind resources is a key objective of energy policies focused on the generation of low carbon electricity. Wind atlases have been developed to provide energy resource maps, containing information on wind speeds and related variables at multiple heights above sea level for offshore regions of interest (ROIs). However, these atlases are often associated with legacy projects, where access to corresponding data products may be restricted, preventing further development by third parties. Reliable, long-term observations are crucial inputs to the offshore wind farm area assessment process, with observations typically measured close to the ocean surface using in situ meteorological masts. Remote sensing techniques have been proposed to address resolution and coverage issues associated with in situ measurements, in particular, the use of space-borne Earth Observation (EO) instruments for ocean and sea surface wind estimations. In recent years, a variety of initiatives have emerged that provide public access to wind speed data products, which have potential for application in wind atlas development and offshore wind farm assessment. Combining products from multiple data providers is challenging due to differences in spatial and temporal resolution, product access, and product formats. In particular, the associated large dataset sizes are significant obstacles to data retrieval, storage, and subsequent computation. The traditional process of retrieval and local analysis of a relatively small number of ROI products is not readily scalable to accommodate longitudinal studies of multiple ROIs.

This work presents a case study that demonstrates the utility of the Pangeo software ecosystem to address these issues in the development of offshore wind speed and power density estimations, increasing wind measurement coverage of offshore renewable energy assessment areas in the Irish Continental Shelf region. The Intake library is used to manage a new data catalog created for this region, consisting of a collection of analysis-ready, cloud-optimized (ARCO) datasets generated using the Zarr format. This ARCO catalog features up to 21 years of available in situ, reanalysis, and satellite observation data products. The xarray and Dask libraries enable scalable catalog processing, including analysis of provided data variables and derivation of new variables as required for candidate wind farm ROIs, avoiding redundant storage and processing requirements for regions not under assessment. Individual catalog datasets have been regridded to relevant spatial grids, or appropriately chunked in time and space, by means of the xESMF and Rechunker libraries respectively. A set of Jupyter notebooks has been created to demonstrate catalog visualization and processing, following the conventions of notebooks in the current Pangeo Gallery. These notebooks provide detailed descriptions of each ARCO dataset, along with an evaluation of wind speed extrapolation and power density estimation methods. The employment of new approaches such as Pangeo Forge for future catalog and dataset creation is also explored. This case study has determined that the Pangeo ecosystem approach is extremely beneficial in the development of open architectures operating on large volumes of disparate data, while also contributing to the objectives of scientific code sharing and reproducibility.
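The derived quantities evaluated in the notebooks can be sketched with plain numpy. The logarithmic wind profile and power density formula below are generic textbook forms, with an illustrative open-sea roughness length, not the study's exact configuration:

```python
import numpy as np

def log_law_extrapolate(v_ref, z_ref, z, z0=0.0002):
    """Neutral-stability log-law extrapolation of wind speed from height
    z_ref to height z; z0 is an illustrative open-sea roughness length (m)."""
    return v_ref * np.log(z / z0) / np.log(z_ref / z0)

def power_density(v, rho=1.225):
    """Wind power density in W/m^2 for wind speed v (m/s) and
    air density rho (kg/m^3)."""
    return 0.5 * rho * v ** 3

v10 = np.array([6.0, 8.0, 10.0])            # wind speeds at 10 m (m/s)
v100 = log_law_extrapolate(v10, 10.0, 100.0)  # extrapolated to 100 m hub height
assert np.all(v100 > v10)                   # speed increases with height
```

Because power density scales with the cube of wind speed, small errors in the extrapolated speed are amplified threefold in the resource estimate, which is why the long, consistent time series assembled in the ARCO catalog matter.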

How to cite: O'Callaghan, D. and McBreen, S.: Scalable Offshore Wind Analysis With Pangeo, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2746, https://doi.org/10.5194/egusphere-egu22-2746, 2022.

EGU22-3095 | Presentations | ESSI2.7

Fluid simulations accelerated with 16 bits: Approaching 4x speedup on A64FX by squeezing ShallowWaters.jl into Float16 

Milan Klöwer, Samuel Hatfield, Matteo Croci, Peter D. Düben, and Tim Palmer

Most Earth-system simulations run on conventional CPUs in 64-bit double-precision floating-point numbers (Float64), although the need for high-precision calculations in the presence of large uncertainties has been questioned. Fugaku, currently the world’s fastest supercomputer, is based on A64FX microprocessors, which also support the 16-bit low-precision format Float16. We investigate the Float16 performance on A64FX with ShallowWaters.jl, the first fluid circulation model that runs entirely with 16-bit arithmetic. The model implements techniques that address precision and dynamic-range issues in 16 bits. The precision-critical time integration is augmented with compensated summation to minimise rounding errors. Such a compensated time integration is as precise as, but faster than, mixed precision with 16 and 32-bit floats. As subnormals are inefficiently supported on A64FX, the very limited range available in Float16 is 6·10^-5 to 65,504. We develop the analysis-number format Sherlogs.jl to log the arithmetic results during the simulation. The equations in ShallowWaters.jl are then systematically rescaled to fit into Float16, using 97% of the available representable numbers. Consequently, we benchmark speedups of up to 3.8x on A64FX with Float16. Adding a compensated time integration, speedups reach up to 3.6x. Although ShallowWaters.jl is simplified compared to large Earth-system models, it shares essential algorithms and therefore shows that 16-bit calculations are indeed a competitive way to accelerate Earth-system simulations on available hardware.
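Compensated summation of the kind used in the time integration can be sketched as classic Kahan summation. This minimal version uses numpy's Float16 rather than the model's Julia implementation, so it illustrates the principle, not the ShallowWaters.jl code:

```python
import numpy as np

def kahan_sum(values, dtype=np.float16):
    """Kahan (compensated) summation: a second variable carries the rounding
    error of each addition into the next step, keeping a low-precision
    accumulator far more accurate than naive accumulation."""
    s = dtype(0)
    c = dtype(0)                 # running compensation for lost low-order bits
    for v in values:
        y = dtype(v) - c
        t = dtype(s + y)
        c = dtype((t - s) - y)   # the rounding error of s + y
        s = t
    return s

# Naive Float16 accumulation of 4096 * 0.1 stalls once the sum is large
# enough that 0.1 falls below half the float16 spacing; Kahan does not.
x = np.full(4096, 0.1, dtype=np.float16)
naive = np.float16(0)
for v in x:
    naive = np.float16(naive + v)

exact = 4096 * 0.1
assert abs(float(kahan_sum(x)) - exact) < abs(float(naive) - exact)
```

The compensation costs three extra additions per step but keeps the accumulated error roughly independent of the number of time steps, which is what makes a fully 16-bit time integration viable.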

How to cite: Klöwer, M., Hatfield, S., Croci, M., Düben, P. D., and Palmer, T.: Fluid simulations accelerated with 16 bits: Approaching 4x speedup on A64FX by squeezing ShallowWaters.jl into Float16, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3095, https://doi.org/10.5194/egusphere-egu22-3095, 2022.

EGU22-3109 | Presentations | ESSI2.7

Compressing atmospheric data into its real information content 

Milan Klöwer, Miha Razinger, Juan J. Dominguez, Peter D. Düben, and Tim Palmer

Hundreds of petabytes are produced annually at weather and climate forecast centres worldwide. Compression is essential to reduce storage and to facilitate data sharing. Current techniques do not distinguish the real from the false information in data, leaving the level of meaningful precision unassessed or often subjectively chosen. Many of the trailing mantissa bits in floating-point numbers occur independently with high information entropy, reducing the efficiency of compression algorithms. Here we define the bitwise real information content from information theory as the mutual information of bits in adjacent grid points. The analysis automatically determines a precision from the data itself, based on the separation of real and false information bits. Applied to data from the Copernicus Atmospheric Monitoring Service (CAMS), most variables contain fewer than 7 bits of real information per value and are highly compressible due to spatio-temporal correlation. Rounding bits without real information to zero facilitates lossless compression algorithms and encodes the uncertainty within the data itself. The removal of bits with high entropy but low real information allows us to minimize information loss but maximize the efficiency of the compression algorithms. All CAMS data are 17x compressed in the longitudinal dimension and relative to 64-bit floats, while preserving 99% of real information. Combined with four-dimensional compression using the floating-point compressor Zfp, factors beyond 60x are achieved, with no significant increase of the forecast error. For multidimensional compression it is generally advantageous to include as many highly correlated dimensions as possible. A data compression Turing test is proposed to optimize compressibility while minimizing information loss for the end use of weather and climate forecast data.
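Rounding the bits without real information can be sketched as integer manipulation of the float representation. This numpy version (round-to-nearest, ties-to-even on a chosen number of kept mantissa bits) illustrates the operation but is not the authors' implementation, and it ignores Inf/NaN edge cases:

```python
import numpy as np

def round_bits(x, keepbits):
    """Round a float32 array to `keepbits` mantissa bits (nearest, tie to
    even) so the discarded trailing bits become zeros that compress well."""
    b = x.astype(np.float32).view(np.uint32)
    drop = 23 - keepbits                      # float32 has 23 mantissa bits
    mask = np.uint32((1 << drop) - 1)
    half = np.uint32(1 << (drop - 1))
    lsb = (b >> drop) & np.uint32(1)          # last kept bit, for tie-to-even
    b = (b + half - np.uint32(1) + lsb) & ~mask
    return b.astype(np.uint32).view(np.float32)

x = np.array([0.1, 1.0, 3.14159], dtype=np.float32)
r = round_bits(x, 7)                          # keep 7 mantissa bits
assert np.allclose(r, x, rtol=2.0**-7)        # bounded relative error
assert r[1] == 1.0                            # exactly representable, unchanged
```

Because the zeroed trailing bits are identical across neighbouring values, a subsequent lossless coder can exploit the resulting redundancy, which is where the large compression factors come from.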

How to cite: Klöwer, M., Razinger, M., Dominguez, J. J., Düben, P. D., and Palmer, T.: Compressing atmospheric data into its real information content, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3109, https://doi.org/10.5194/egusphere-egu22-3109, 2022.

EGU22-3230 | Presentations | ESSI2.7

Lossy compression in violent thunderstorm simulations: Lessons learned and future goals 

Leigh Orf and Kelton Halbert

Here we discuss our experiences with ZFP lossy floating-point compression in eddy-resolving cloud modeling simulations of violent thunderstorms executed on the Blue Waters and Frontera supercomputers. Lossy compression has reduced our simulation data load by a factor of 20-100 relative to uncompressed output. This savings enables us to save data at extremely high temporal resolution, up to the model's time step, the smallest possible temporal discretization. Further data savings are realized by only saving a subdomain of the entire simulation, and this has opened the door to new approaches to analysis. We will discuss the Lack Of a File System (LOFS) compressed format that model data is saved in, as well as conversion routines to create individual ZFP-compressed NetCDF4 files for sharing with collaborators and for archiving. Further, we will discuss the effect of lossy compression on offline Lagrangian parcel analysis from LOFS data. Preliminary results suggest that high compression does not alter parcel paths considerably in cloud model simulation data over several minutes of integration compared to uncompressed data.

How to cite: Orf, L. and Halbert, K.: Lossy compression in violent thunderstorm simulations: Lessons learned and future goals, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3230, https://doi.org/10.5194/egusphere-egu22-3230, 2022.

EGU22-3285 | Presentations | ESSI2.7

Accelerating the Lagrangian particle tracking in hydrologic modeling at continental-scale 

Chen Yang, Carl Ponder, Bei Wang, Hoang Tran, Jun Zhang, Jackson Swilley, Laura Condon, and Reed Maxwell

Unprecedented climate change and anthropogenic activities have induced increasing ecohydrological issues. Large-scale hydrologic modeling of water quantity is developing rapidly to seek solutions for these issues. Water-parcel transport (e.g., water age, water quality) is as important as water quantity for understanding the changing water cycle. However, current scientific progress in water-parcel transport at large scale lags far behind that in water quantity. The known cause is the lack of powerful tools to handle observations and/or modeling of water-parcel transport at large scale with high spatiotemporal resolution. Lagrangian particle tracking based on integrated hydrologic modeling stands out among other methods because it accurately captures water-parcel movements. Nonetheless, the Lagrangian approach is computationally expensive, hindering its broad application in hydrologic modeling, particularly at large scale. EcoSLIM, a grid-based particle tracking code, calculates water ages (e.g., evapotranspiration, outflow, and groundwater) and identifies source water composition (e.g., rainfall, snowmelt, and initial subsurface water), working seamlessly with the integrated hydrologic model ParFlow-CLM. EcoSLIM is written in Fortran and was originally parallelized with OpenMP (Open Multi-Processing) using shared CPU memory. Here, we accelerate EcoSLIM by implementing it on a distributed, multi-GPU platform using CUDA (Compute Unified Device Architecture) Fortran.

We decompose the modeling domain into subdomains. Each GPU is responsible for one subdomain. Particles moving out of a subdomain continue moving temporarily in halo grid cells around the subdomain and are then transferred to the neighboring subdomains. Different transfer schemes are built to balance simulation accuracy and computing speed. Particle transfer leverages CUDA-aware MPI (Message Passing Interface) to improve parallel efficiency. Load imbalance among GPUs, induced by irregular domain boundaries and the heterogeneity of flow paths, is observed. A load-balancing scheme, borrowed from Particle-In-Cell and modified based on the characteristics of EcoSLIM, is established. The simulation starts on fewer GPUs than the total scheduled. The manager MPI process activates an idle GPU for a subdomain once the particle count on its current GPU(s) exceeds a specified threshold. Finally, all scheduled GPUs are enabled. Tests of the new code from catchment scale (the Little Washita watershed) to regional scale (the North China Plain) and continental scale (the continental US), using millions to billions of particles, show significant speedup and excellent parallel performance. The parallelized EcoSLIM is a promising tool for the hydrologic community to accelerate our understanding of the terrestrial water cycle beyond the water balance in the changing world.
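The halo-based transfer idea can be conveyed with a serial, 1-D numpy sketch (purely illustrative, not EcoSLIM's CUDA Fortran implementation; the function and variable names are invented): particles advect, keep their current owner while they remain inside the halo around their subdomain, and are handed to the subdomain owning their new position once they leave the halo.

```python
import numpy as np

def owner(positions, edges):
    """Index of the subdomain owning each position (edges: subdomain boundaries)."""
    return np.clip(np.searchsorted(edges, positions, side="right") - 1,
                   0, len(edges) - 2)

def advect_and_exchange(positions, owners, edges, velocity, dt, halo):
    """Advect particles, then transfer those that left their subdomain's halo."""
    positions = positions + velocity * dt
    lo, hi = edges[owners], edges[owners + 1]        # bounds of each particle's subdomain
    left_halo = (positions < lo - halo) | (positions > hi + halo)
    owners = np.where(left_halo, owner(positions, edges), owners)
    return positions, owners
```

In the real code each "owner" is a GPU and the `np.where` reassignment becomes an MPI message to the neighbor; the halo lets a particle cross the boundary without triggering a transfer on every step.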

How to cite: Yang, C., Ponder, C., Wang, B., Tran, H., Zhang, J., Swilley, J., Condon, L., and Maxwell, R.: Accelerating the Lagrangian particle tracking in hydrologic modeling at continental-scale, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3285, https://doi.org/10.5194/egusphere-egu22-3285, 2022.

EGU22-3739 | Presentations | ESSI2.7

Environmental Data Science Book: a community-driven resource showcasing open-source Environmental science 

Alejandro Coca-Castro, Scott Hosking, and The Environmental Data Science Community

With the plethora of open data and computational resources available, environmental data science research and applications have accelerated rapidly. There is therefore an opportunity for community-driven initiatives that compile and classify open-source research and applications across environmental systems (polar, oceans, forests, agriculture, etc.). Building upon the Pangeo Gallery, we propose the Environmental Data Science Book (https://the-environmental-ds-book.netlify.app), a community-driven online resource showcasing and supporting the publication of data, research and open-source developments in environmental sciences. The target audience and early adopters are (i) anyone interested in open-source tools for environmental science; and (ii) anyone interested in reproducible, inclusive, shareable and collaborative AI and data science for environmental applications. Following FAIR principles, the resource provides features such as guidelines, templates, persistent URLs and Binder to facilitate fully documented, shareable and reproducible notebooks. The quality of the published content is ensured by a transparent reviewing process supported by GitHub-related technologies. To date, the community has successfully published five Python-based notebooks: two forest-, two wildfire/savanna- and one polar-related. The notebooks use the common Pangeo stack, e.g. intake, iris, xarray and hvplot, for interactive visualisation and modelling of environmental sensor data. In addition to continuous feature enhancements of the GitHub repository https://github.com/alan-turing-institute/environmental-ds-book, we aim to increase inclusivity (multiple languages), diversity (multiple backgrounds) and activity (collaboration and coworking sessions) towards improving scientific software practices in the environmental science community.

How to cite: Coca-Castro, A., Hosking, S., and Community, T. E. D. S.: Environmental Data Science Book: a community-driven resource showcasing open-source Environmental science, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3739, https://doi.org/10.5194/egusphere-egu22-3739, 2022.

EGU22-4566 | Presentations | ESSI2.7

Is there a correlation between the cloud phase and surface snowfall rate in GCMs? 

Franziska Hellmuth, Anne Claire Mireille Fouilloux, Trude Storelvmo, and Anne Sophie Daloz

Cloud feedbacks are a major contributor to the spread of climate sensitivity in global climate models (GCMs) [1]. Among the most poorly understood cloud feedbacks is the one associated with cloud phase, which is expected to change under climate change [2]. In addition, cloud phase bias has significant implications for the simulation of radiative properties and of glacier and ice sheet mass balances in climate models.

In this context, this work aims to expand our knowledge on how the representation of the cloud phase affects snow formation in GCMs. Better understanding this aspect is necessary to develop climate models further and improve future climate predictions. 

This study will compare surface snowfall, ice, and liquid water content from Coupled Model Intercomparison Project Phase 6 (CMIP6) climate models (accessed through Pangeo) to European Centre for Medium-Range Weather Forecasts Reanalysis 5 (ERA5) data from 1985 to 2014. We conduct statistical analyses at annual and seasonal timescales to determine the biases in cloud phase and precipitation (liquid and solid) in the CMIP6 models and the potential connections between them. 

For the analysis, we use a step-by-step Jupyter notebook on the CMIP6 analysis (https://github.com/franzihe/eosc-nordic-climate-demonstrator/blob/master/work/), which guides the user through each step. The Pangeo intake package makes it possible to browse the CMIP6 online catalog for the required variables, models, and experiments and to store them as xarray/dask datasets. Vertical variables on sigma pressure levels had to be interpolated to the standard pressure levels provided in ERA5. We also interpolated the horizontal and vertical variables to the same horizontal grid resolution before calculating the climatology. 
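The vertical-interpolation step described above can be sketched with plain numpy (a hedged illustration; the variable names and the sample profile are invented, not taken from the notebook): a profile on model sigma-derived pressure levels is linearly interpolated onto standard pressure levels before comparison with ERA5.

```python
import numpy as np

def to_standard_levels(model_p, values, standard_p):
    """Linearly interpolate values(model_p) onto standard_p (both in hPa).
    np.interp requires an increasing coordinate, so sort by pressure first."""
    order = np.argsort(model_p)
    return np.interp(standard_p, model_p[order], values[order])

# Example: a temperature profile on three model levels, remapped to 925 and 700 hPa.
model_p = np.array([1000.0, 850.0, 500.0])     # model pressure levels, top last
temp = np.array([280.0, 270.0, 250.0])         # temperature on those levels (K)
on_standard = to_standard_levels(model_p, temp, np.array([925.0, 700.0]))
```

In the actual workflow this would be applied along the vertical dimension of an xarray dataset for every grid column and time step.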

A global comparison between the reanalysis (ERA5) and the CMIP6 models shows that models tend to underestimate the ice water path compared to the reanalysis, even if most of them can reproduce some of the characteristics of liquid water content and snowfall. To better understand the link between biases in cloud phase and surface snowfall rate, we look for a relationship between ice water path and surface snowfall in GCMs. Linear regressions within extratropical areas show a positive relationship between ice water content and surface snowfall in the reanalysis data, while the CMIP6 models do not show this relationship. 

  

[1] Zelinka, M. D., Myers, T. A., McCoy, D. T., Po-Chedley, S., Caldwell, P. M., Ceppi, P., et al. (2020). Causes of higher climate sensitivity in CMIP6 models. Geophysical Research Letters, 47, e2019GL085782. https://doi-org.ezproxy.uio.no/10.1029/2019GL085782 

[2] Bjordal, J., Storelvmo, T., Alterskjær, K. et al. Equilibrium climate sensitivity above 5 °C plausible due to state-dependent cloud feedback. Nat. Geosci. 13, 718–721 (2020). https://doi-org.ezproxy.uio.no/10.1038/s41561-020-00649-1 

 

Github: https://github.com/franzihe 

How to cite: Hellmuth, F., Fouilloux, A. C. M., Storelvmo, T., and Daloz, A. S.: Is there a correlation between the cloud phase and surface snowfall rate in GCMs?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4566, https://doi.org/10.5194/egusphere-egu22-4566, 2022.

EGU22-5709 | Presentations | ESSI2.7

Pangeo for everyone with Galaxy 

Anne Fouilloux, Yvan Le Bras, and Adele Zaini

Pangeo has been deployed on a number of diverse infrastructures, and learning resources are available, for instance the Pangeo Tutorial Gallery (http://gallery.pangeo.io/repos/pangeo-data/pangeo-tutorial-gallery/index.html). However, knowledge of Python is necessary to develop or reuse applications within the Pangeo ecosystem, which hinders its wider adoption and reduces potential interdisciplinary collaborations. 

Our main objective is to lower the barriers to using the Pangeo ecosystem, allow everyone to understand the fundamental concepts behind Pangeo, and offer a Pangeo deployment for teaching and for developing reproducible, reusable and fully automated workflows.

Most Pangeo tutorials and examples use Jupyter notebooks, but the gap between these “toy examples” and real, complex applications is still huge: adopting best software practices for Jupyter notebooks and large applications is essential for reuse and automation of workflows.

The Galaxy project is a worldwide community dedicated to making tools, workflows and infrastructures open and accessible to everyone. Each tool in Galaxy has a wrapper describing the tool itself along with its input and output parameters, citations, and possible annotations thanks to the EDAM ontology. Galaxy workflows are also annotated and can contain any kind of Galaxy tool, including interactive tools such as Pangeo notebooks. 

Galaxy is also accessible via a web-based interface. The platform is designed to be community and technology agnostic and has gained adoption in various communities, ranging from Climate Science and Biodiversity to Biology and Medicine. 

By combining Pangeo and Galaxy, we provide access to the Pangeo ecosystem to everyone, including those who are not familiar with Python, and we offer fully automated and annotated Pangeo “tools”. 

Two main sets of tools are currently available in Galaxy:

  • Pangeo notebook (kept in sync with the corresponding Pangeo Docker images: https://github.com/pangeo-data/pangeo-docker-images) 
  • Xarray tools to manipulate and visualise netCDF data from the Galaxy graphical user interface.

Training material is being developed and included in the Galaxy Training Network (https://training.galaxyproject.org/):

  • “Pangeo ecosystem 101 for everyone - Introduction to Xarray Galaxy Tools”, where anyone can learn about Pangeo and its main concepts and try it out without using any command lines;
  • “Pangeo Notebook in Galaxy - Introduction to Xarray”: it is very similar to the “Xarray Tutorial” from Pangeo (http://gallery.pangeo.io/repos/pangeo-data/pangeo-tutorial-gallery/xarray.htm) but makes use of Galaxy Pangeo notebooks and offers a different entry point to Pangeo.

Galaxy Training Infrastructure as a Service (https://galaxyproject.eu/tiaas.html), with infrastructure at no cost, is provided by Galaxy Europe for teachers/instructors. It was used for the FORCeS eScience course “Tools in Climate Science: Linking Observations with Modeling” (https://galaxyproject.eu/posts/2021/11/13/tiaas-anne/), where about 30 students learned about Pangeo (see https://nordicesmhub.github.io/forces-2021/intro.html).

Galaxy Pangeo also contributes to the worldwide online training “GTN Smörgåsbord” (last event: 14-18 March 2022, https://gallantries.github.io/posts/2021/12/14/smorgasbord2-tapas/), where everyone is welcome as a trainee, trainer or simply an observer! This will contribute to democratising Pangeo.

How to cite: Fouilloux, A., Le Bras, Y., and Zaini, A.: Pangeo for everyone with Galaxy, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5709, https://doi.org/10.5194/egusphere-egu22-5709, 2022.

EGU22-6350 | Presentations | ESSI2.7

OSDYN: a new python tool for the analysis of high-volume ocean outputs. 

Valerie Garnier, Jean-Francois Le Roux, Justus Magin, Tina Odaka, Pierre Garreau, Martial Boutet, Stephane Raynaud, Claude Estournel, and Jonathan Beuvier

OSDYN (Observations and Simulations of the DYNamics) is a Python library that provides diagnostics to explore the dynamics of the ocean and its interactions with the atmosphere and waves. Its main strengths are its genericity with respect to different types of netCDF files and its ability to handle large volumes of data.

Dedicated to large data sets such as in-situ and satellite observations and numerical model outputs, OSDYN is particularly powerful for managing different types of Arakawa-C grids and vertical coordinates (Nemo, Croco, Mars, Symphonie, WW3, MesoNH). Based on the common Pangeo stack (xarray, dask, xgcm), OSDYN provides data readers that standardize the dimensions, coordinates, and variable names and properties of the datasets. Thus, all Python diagnostics can be shared regardless of the model that produced the outputs.

Thanks to progress made using kerchunk and to efforts on transforming metadata at Ifremer's HPC centre (auto-kerchunk), reading a large number of netCDF files is fast and the selection of sub-domains or specific variables is almost immediate.

Jupyter notebooks will detail the implementation of three kinds of analyses. The first focuses on climatological issues. The second addresses spatial interpolation and the comparison of data sets with missing values, in order to compare modeled and satellite sea surface temperatures. Lastly, the third provides an overview of how diagnostics describing the formation of deep water masses can be applied to different data sets.

How to cite: Garnier, V., Le Roux, J.-F., Magin, J., Odaka, T., Garreau, P., Boutet, M., Raynaud, S., Estournel, C., and Beuvier, J.: OSDYN: a new python tool for the analysis of high-volume ocean outputs., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6350, https://doi.org/10.5194/egusphere-egu22-6350, 2022.

EGU22-7151 | Presentations | ESSI2.7

Storage growth mitigation through data analysis ready climate datasets using HDF5 Virtual Datasets 

Ezequiel Cimadevilla and Antonio S. Cofiño

Climate datasets are usually provided in separate files that facilitate dataset management in climate data distribution systems. In ESGF [1] (Earth System Grid Federation), a time series of a variable is split into smaller pieces of data in order to reduce file size. Although this enhances usability for data management in the ESGF distribution system (i.e. file publishing, download, …), it degrades usability for data analysis. Usually, workflows need to pre-process and rearrange multiple files into a single data source in order to obtain a data analysis dataset, involving data rewriting and duplication with the corresponding storage growth.

The mitigation of storage growth can be achieved by creating virtual views, allowing a number of actual datasets to be multidimensionally mapped together into a single multidimensional dataset that requires neither rewriting data nor consuming additional storage. Due to the increasing interest in offering climate researchers appropriate single data analysis datasets, some mechanisms have been or are being developed to tackle this issue, such as NcML (netCDF Markup Language), xarray/netCDF-4 multiple-file datasets and H5VDS. HDF5 Virtual Datasets [3] (H5VDS) provide researchers with different views of interest of a compound dataset, without the cost of duplicating information, facilitating data analysis in an easy and transparent way.

In the climate community and in ESGF, netCDF is the standard data model and format for climate data exchange. The default storage format of netCDF-4 is HDF5, which brings HDF5 features into the netCDF library, including chunking [2], compression [2], virtual datasets and many other capabilities. H5VDS introduces a new dataset storage type that allows multiple HDF5 (and netCDF-4) datasets to be mapped together into a single sliceable dataset via an interface layer. The datasets can be mixed in arbitrary combinations, based on range selections that map onto range selections in the sources. The mapping can convert between different data types and can add, remove or modify existing metadata (i.e. dataset attributes), which is usually a common obstacle to accessing the data. 
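The mechanism can be sketched with h5py's VDS API (a minimal illustration, not the workflow applied to the ESGF data; file names, the variable name "tas" and the array sizes are invented): two HDF5 source files are mapped into one virtual dataset along the time dimension without rewriting or duplicating any data.

```python
import numpy as np
import h5py

def build_vds(sources, vds_path, var="tas", nt=12, nx=4):
    """Map `var` from each source file into one virtual dataset,
    concatenated along the first (time) dimension."""
    layout = h5py.VirtualLayout(shape=(nt * len(sources), nx), dtype="f4")
    for i, src in enumerate(sources):
        # each source contributes one block of `nt` time steps
        layout[i * nt:(i + 1) * nt] = h5py.VirtualSource(src, var, shape=(nt, nx))
    with h5py.File(vds_path, "w") as f:
        f.create_virtual_dataset(var, layout, fillvalue=np.nan)
```

A reader then slices the virtual file exactly as if it were one contiguous dataset; since VDS lives at the HDF5 storage layer, netCDF-4-based tools see it transparently.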

In this work, H5VDS features are applied to CMIP6 climate simulation datasets from ESGF in order to provide data-analysis-ready virtual datasets. Examples with common tools/libraries (i.e. netcdf-c, xarray, nco, cdo, …) illustrate the convenience of the proposed approach. Using H5VDS facilitates data analysis workflows by enabling climate researchers to focus on data analysis rather than data engineering tasks. Also, since the H5VDS is created at the storage layer, these datasets are transparent to the netCDF-4 library and existing applications can benefit from this feature.

References

[1] L. Cinquini et al., “The Earth System Grid Federation: An open infrastructure for access to distributed geospatial data,” Future Generation Computer Systems, vol. 36, pp. 400–417, 2014, doi: 10.1016/j.future.2013.07.002. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S0167739X13001477. [Accessed: 16-Jan-2020]
[2] The HDF Group, “Chunking in HDF5”, 11-Feb.-2019. [Online]. Available: https://portal.hdfgroup.org/display/HDF5/Chunking+in+HDF5. [Accessed: 12-Jan.-2022]

[3] The HDF Group, “Virtual Dataset VDS”, 06-Apr.-2018. [Online]. Available: https://portal.hdfgroup.org/display/HDF5/Virtual+Dataset++-+VDS. [Accessed: 12-Jan.-2022]

Acknowledgements

This work has been developed with support from IS-ENES3, which is funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 824084.

How to cite: Cimadevilla, E. and Cofiño, A. S.: Storage growth mitigation through data analysis ready climate datasets using HDF5 Virtual Datasets, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7151, https://doi.org/10.5194/egusphere-egu22-7151, 2022.

EGU22-7593 | Presentations | ESSI2.7

How to turn satellite data to insights at scale 

Basile Goussard

NetCarbon, a brand-new French startup company, offers farmers a free solution for measuring and monetizing their sequestered carbon to contribute towards carbon neutrality. This solution relies on satellite data (Sentinel-2, Landsat 8 & PlanetScope) and open-source ecosystems such as the Pangeo software stack.

 

The challenge in NetCarbon’s solution is deploying Earth observation insights at scale, while being able to shift between cloud providers or on-premise architectures if needed. Up to now, the best tool for us is Pangeo.  

 

An example of our pangeo usage will be shown in the following three points.   

1°) Connection to satellite data / Extract 

2°) Processing satellite data at scale / Transform

3°) Saving the data within a data warehouse / Load

 

First, some of the building blocks used to search for satellite data based on STAC will be shown. Moreover, the stackstac package will be used to convert STAC items into xarray objects, allowing researchers and companies to create their datacubes with all the metadata inside. 

 

The second part of the presentation covers the computation layer. Algorithms such as filtering by cloud cover, applying the cloud mask, computing the land surface temperature, and applying an interpolation will be run. Land surface temperature is one of the inputs needed by the NetCarbon algorithm. The result of these steps is a dask computation graph, which will be run at scale in the cloud, based on Dask and Coiled. 
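The transform steps can be mimicked eagerly with plain numpy (an illustrative stand-in for the lazy dask/xarray version used in production; the array layout and names are invented): mask cloudy pixels, fill the gaps along time by linear interpolation, then reduce to a spatio-temporal mean.

```python
import numpy as np

def masked_interpolated_mean(cube, cloud_mask):
    """cube: (time, y, x) array; cloud_mask: same shape, True = cloudy.
    Masks clouds, linearly interpolates each pixel's time series over the
    gaps, and returns the spatio-temporal mean."""
    cube = np.where(cloud_mask, np.nan, cube).astype(float)
    t = np.arange(cube.shape[0])
    flat = cube.reshape(cube.shape[0], -1)      # one column per pixel
    for j in range(flat.shape[1]):
        col = flat[:, j]
        good = ~np.isnan(col)
        if good.any():
            flat[:, j] = np.interp(t, t[good], col[good])
    return np.nanmean(flat)
```

With dask the per-pixel loop would instead be expressed as a chunked, lazily evaluated graph, which is what gets shipped to the Coiled cluster.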

 

To conclude, the output of the processing part (the spatial and temporal mean of the land surface temperature) will be displayed in a notebook, and finally the data will be loaded into a data warehouse (Google BigQuery). 

 

All the steps will be demonstrated in a reproducible notebook.

How to cite: Goussard, B.: How to turn satellite data to insights at scale, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7593, https://doi.org/10.5194/egusphere-egu22-7593, 2022.

EGU22-7610 | Presentations | ESSI2.7

Distributing your GPU array computation in Python 

Jacob Tomlinson

There are many powerful libraries in the Python ecosystem for accelerating the computation of large arrays with GPUs. We have CuPy for GPU array computation, Dask for distributed computation, cuML for machine learning, PyTorch for deep learning and more. We will dig into how these libraries can be used together to accelerate geoscience workflows and how we are working with projects like Xarray to integrate these libraries with domain-specific tooling. Sgkit already provides this for the field of genetics, and we are excited to be working with community groups like Pangeo to bring this kind of tooling to the geosciences.

How to cite: Tomlinson, J.: Distributing your GPU array computation in Python, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7610, https://doi.org/10.5194/egusphere-egu22-7610, 2022.

EGU22-7869 | Presentations | ESSI2.7

CPMIP: Computational evaluation of the new era of complex Earth System Models. Multi-model results from CMIP6 and challenges for the exascale computing.

M. Acosta, V. Balaji, S. Palomas, and S. Paronuzzi

The increase in Earth System Model (ESM) capabilities is strongly linked to the amount of computing power and data storage capacity available. The scientific community requires increased model resolution, large numbers of experiments and ensembles to quantify uncertainty, increased complexity of ESMs (including additional components), and longer simulation periods compared to the current state of climate models. HPC is currently undergoing a major change with the arrival of the next generation of computing systems (‘exascale systems’). These challenges cannot be met by mere extrapolation but require radical innovation in several computing technologies and numerical algorithms. Most applications targeting exascale machines require some degree of rewriting to expose more parallelism, and many face severe strong-scaling challenges if they are to progress effectively to exascale, as demanded by their science goals. 

 

However, the performance evaluation of the new models along the exascale path will also become more complex. We need new approaches to ensure that the computational evaluation of this new generation of models is done correctly. Moreover, this evaluation will help in the computational analysis during model development and ensure the maximum throughput possible when operational configurations such as CMIP are run.

 

CPMIP metrics are a universal set of easy-to-collect metrics which provide a new way to study ESMs from a computational point of view. Thanks to the H2020 project IS-ENES3, we had a unique opportunity to exploit this new set of metrics to create a novel database based on CMIP6 experiments, using the different models and platforms available across Europe.
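Two of the CPMIP metrics are simple enough to sketch directly (the definitions follow the published CPMIP metric set; the example numbers are invented): SYPD, the simulated years per wallclock day, and CHSY, the core-hours consumed per simulated year.

```python
def sypd(simulated_years, wallclock_hours):
    """SYPD: simulated years per wallclock day."""
    return simulated_years / (wallclock_hours / 24.0)

def chsy(cores, simulated_years, wallclock_hours):
    """CHSY: core-hours consumed per simulated year (= cores * 24 / SYPD)."""
    return cores * wallclock_hours / simulated_years

# Invented example: a 10-year run on 1000 cores taking 48 wallclock hours
# yields 5 SYPD at a cost of 4800 core-hours per simulated year.
speed = sypd(10, 48)
cost = chsy(1000, 10, 48)
```

The two metrics pull in opposite directions (adding cores usually raises SYPD but also CHSY), which is precisely the throughput-versus-cost trade-off the CPMIP database makes comparable across models and platforms.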

 

The results and analysis are presented here, where both differences and similarities among the models can be observed on a variety of hardware. Moreover, the current database supports different studies, such as the comparison of different models running similar configurations, or of the same model and configuration executed on different platforms. All these possibilities create a unique context that should be exploited by the community to improve the evaluation of the computational performance of ESMs, using this information for future optimizations and to prepare our models for the new exascale platforms. Finally, general prescriptions on how to disseminate the work are given, and the need for the community to adopt CPMIP metrics on both current and next-generation platforms is presented.

How to cite: Acosta, M., Balaji, V., Palomas, S., and Paronuzzi, S.: CPMIP: Computational evaluation of the new era of complex Earth System Models. Multi-model results from CMIP6 and challenges for the exascale computing., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7869, https://doi.org/10.5194/egusphere-egu22-7869, 2022.

EGU22-8094 | Presentations | ESSI2.7

Approach to make an I/O server performance-portable across different platforms: OpenIFS-XIOS integration as a case study 

Jan Streffing, Xavier Yepes-Arbós, Mario C. Acosta, and Kim Serradell

Current Earth System Models (ESMs) produce a large amount of data due to the increasing complexity of the simulated models and their rising spatial resolution. With the exascale era approaching rapidly, efficient I/O will be critical to sustain model throughput. The most commonly adopted approach in ESMs is the use of scalable parallel I/O solutions that are intended to minimize the overhead of writing data into the storage system. However, I/O servers with inline diagnostics introduce more complexity and many parameters that need to be tuned. This means that it is necessary to achieve an optimal trade-off between throughput and resource usage.

ESMs are usually run on different platforms which might have different architectural specifications: latency, bandwidth, number of cores and memory per node, file system, etc. In addition, a single ESM can run different configurations which require different amounts of resources: resolution, output frequency, number of fields, etc. Since each case is particular, the I/O server should be tuned for each platform and model configuration.

We present an approach to identify and tune a series of important parameters that should be considered in an I/O server. In particular, we focus on the XML Input/Output Server (XIOS), and we use it integrated with OpenIFS, an atmospheric general circulation model, as a case study. We tune not only basic parameters such as the number of XIOS servers, the number of servers per node, and the type and frequency of post-processing operations, but also specific ones such as the XIOS buffer size, the splitting of NetCDF files across I/O servers, Lustre striping, and the two-level server mode of XIOS.

The evaluation of different configurations on different machines proves that it is possible, and necessary, to find a proper setup for XIOS to achieve good throughput with an adequate consumption of computational resources. In addition, the results show that the OpenIFS-XIOS integration performs well on the platforms evaluated. This suggests that the integration is portable, even though it was initially developed for a specific platform.

How to cite: Streffing, J., Yepes-Arbós, X., C. Acosta, M., and Serradell, K.: Approach to make an I/O server performance-portable across different platforms: OpenIFS-XIOS integration as a case study, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8094, https://doi.org/10.5194/egusphere-egu22-8094, 2022.

EGU22-8762 | Presentations | ESSI2.7

Lossy Data Compression and the Community Earth System Model 

Allison H. Baker, Dorit M. Hammerling, Alex Pinard, and Haiying Xu

Climate models such as the Community Earth System Model (CESM) typically produce enormous amounts of output data, and storage capacities have not increased as rapidly as processor speeds over the years. As a result, the cost of storing huge data volumes has become increasingly problematic and has forced climate scientists to make hard choices about which variables to save, data output frequency, simulation lengths, or ensemble sizes, all of which can negatively impact science objectives. Therefore, we have been investigating lossy data compression techniques as a means of reducing data storage for CESM. Lossy compression, by definition, does not exactly preserve the original data, but it achieves higher compression rates and subsequently smaller storage requirements. However, as with any data reduction approach, we must exercise extreme care when applying lossy compression to climate output data to avoid introducing artifacts that could affect scientific conclusions. Our focus has been on better understanding the effects of lossy compression on spatio-temporal climate data and on gaining user acceptance via careful analysis and testing. In this talk, we will describe the challenges and concerns that we have encountered when compressing climate data from CESM and will discuss developing appropriate climate-specific metrics and tools to enable scientists to evaluate the effects of lossy compression on their own data and to optimize compression for each variable. In particular, we will present our Large Data Comparison for Python (LDCPy) package for visualizing and computing statistics on differences between multiple datasets, which enables climate scientists to discover potentially relevant compression-induced artifacts in their data. Additionally, we will demonstrate the usefulness of an alternative to the popular SSIM that we developed, called the Data SSIM (DSSIM), which can be applied directly to floating-point data in the context of evaluating differences due to lossy compression on large volumes of simulation data.
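The idea of applying an SSIM-style index directly to floating-point fields can be illustrated with a drastically simplified global (non-windowed) variant; this is not the DSSIM itself, which is more elaborate and operates over sliding windows. The constants follow the usual SSIM form with the data range standing in for the image dynamic range.

```python
import numpy as np

def global_ssim(a, b):
    """Global SSIM-style similarity of two float fields: 1.0 for identical
    fields, smaller when means, variances or covariance diverge."""
    rng = max(a.max() - a.min(), b.max() - b.min())
    c1, c2 = (0.01 * rng) ** 2, (0.03 * rng) ** 2   # usual SSIM stabilizers
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (va + vb + c2))
```

Comparing original and lossily compressed fields with such an index gives a single score that degrades as compression artifacts change the field's statistics, which is the intuition behind using the DSSIM as a compression-quality gate.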

How to cite: Baker, A. H., Hammerling, D. M., Pinard, A., and Xu, H.: Lossy Data Compression and the Community Earth System Model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8762, https://doi.org/10.5194/egusphere-egu22-8762, 2022.

EGU22-9152 | Presentations | ESSI2.7

WOAST: an Xarray package applying Wavelet Scattering Transform to geophysical data 

Edouard Gauvrit, Jean-Marc Delouis, Marie-Noëlle Bouin, and François Boulanger

The ocean plays a key role in regulating climate through the dynamical coupling between the sea surface and the atmosphere. Understanding this coupling is a key issue in climate change modelling, but an adapted statistical representation is still lacking. A strong limitation comes from the non-Gaussianities existing within the wind-over-waves surface layer, where wind flows are constrained by the sea state and the swell. We seek an approach to describe statistically the couplings across scales, which are poorly measured by the power spectrum. Recent developments in data science provide new tools such as the Wavelet Scattering Transform (WST), which gives a low-variance statistical description of non-Gaussian processes and offers a way to go beyond the power spectrum representation, which is blind to position consistency between scales. To establish the methodology, we applied the WST to 1D anemometer time series and 2D atmospheric simulations (LES) and compared the results with well-known statistical information. These analyses were made possible by the development of the WOAST (Wavelet Ocean-Atmosphere Scattering Transform) software. The WST computation is mathematically embarrassingly parallel, and its run time is dominated by data access and memory management. Our preliminary geophysical analysis using WOAST, and its efficiency in extracting unknown properties of intermittent processes, will be shown through a Jupyter notebook example. This work is part of the Astrocean project supported by 80Prime grants (CNRS).
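The first-order scattering idea (band-pass filtering, complex modulus, low-pass averaging) can be sketched in plain numpy. The Gabor-like filters and dyadic scales below are invented for illustration; WOAST implements the full scattering formalism.

```python
import numpy as np

def scattering1d_order1(x, n_scales=4):
    """First-order scattering coefficients S1[j] = mean(|x * psi_j|):
    band-pass filter at dyadic scales, take the complex modulus, then
    average in time. A minimal sketch of the WST idea, not WOAST itself."""
    N = x.size
    freqs = np.fft.fftfreq(N)            # frequencies in cycles per sample
    X = np.fft.fft(x)
    coeffs = []
    for j in range(n_scales):
        xi, sigma = 0.35 / 2**j, 0.15 / 2**j      # center frequency, bandwidth
        psi_hat = np.exp(-((freqs - xi) ** 2) / (2 * sigma**2))  # Gabor-like filter
        u = np.abs(np.fft.ifft(X * psi_hat))      # wavelet modulus U1 = |x * psi_j|
        coeffs.append(u.mean())                   # global average acts as the low pass
    return np.array(coeffs)
```

A pure oscillation concentrates its energy in the coefficient whose filter matches its frequency, which is the kind of scale-localized summary the power spectrum also gives; second-order coefficients (not shown) are what capture the couplings across scales.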

How to cite: Gauvrit, E., Delouis, J.-M., Bouin, M.-N., and Boulanger, F.: WOAST : an Xarray package applying Wavelet Scattering Transform to geophysical data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9152, https://doi.org/10.5194/egusphere-egu22-9152, 2022.

EGU22-9153 | Presentations | ESSI2.7

Exploring the SZ lossy compressor use for the XIOS I/O server 

Xavier Yepes-Arbós, Sheng Di, Kim Serradell, Franck Cappello, and Mario C. Acosta

Earth system models (ESMs) have increased their spatial resolution to achieve more accurate solutions. As a consequence, the number of grid points increases dramatically, and an enormous amount of data is produced as simulation results. In addition, if ESMs manage to take advantage of the upcoming exascale computing power, their current data management systems will become a bottleneck as data production grows exponentially.

The XML Input/Output Server (XIOS) is an MPI parallel I/O server designed for ESMs to efficiently post-process data inline as well as read and write data in NetCDF4 format. Although it offers good computational performance at current resolutions, this could change at higher resolutions, since XIOS performance depends strongly on output size. To address this problem we test HDF5 compression in order to reduce the size of the data so that both I/O time and storage footprint can be improved. However, the default lossless compression filter of HDF5 does not provide a good trade-off between size reduction and computational cost.

Alternatively, we consider lossy compression filters that can reach high compression ratios with enough compression speed to considerably reduce the I/O time while keeping high accuracy. In particular, we are exploring the feasibility of using the SZ lossy compressor, developed by Argonne National Laboratory (ANL), to write highly compressed NetCDF files through XIOS. As a case study we use the Open Integrated Forecast System (OpenIFS), an atmospheric general circulation model that can output data through XIOS.

How to cite: Yepes-Arbós, X., Di, S., Serradell, K., Cappello, F., and C. Acosta, M.: Exploring the SZ lossy compressor use for the XIOS I/O server, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9153, https://doi.org/10.5194/egusphere-egu22-9153, 2022.

EGU22-9741 | Presentations | ESSI2.7

Improving lossy compression for climate datasets with SZ3 

Franck Cappello, Sheng Di, and Robert Underwood

The projected growth of climate data volumes through 2030 poses an important challenge to the climate science community. This is particularly true for CMIP7, which is projected to need about an exabyte of storage capacity. Error-bounded lossy compression is being explored as a potential solution to this problem by different climate research teams. Several lossy compression schemes have been proposed, leveraging different forms of decorrelation (transforms, prediction, HoSVD, DNN), quantization (linear, non-linear, vector), and encoding (dictionary-based, variable-length, etc.). Our experience with different applications shows that compression methods often need to be customized and optimized to fit the specificities of the datasets to compress and the user requirements on compression quality, ratio, and throughput. However, none of the existing lossy compression software for scientific data has been designed to be customizable. To address this issue, we developed SZ3, an innovative, customizable, modular compression framework. SZ3 is a full C++ refactoring of SZ2 enabling the specialization, addition, or removal of each stage of the lossy compression pipeline to fit the specific characteristics of the datasets to compress and the use-case requirements. This extreme flexibility allows adapting SZ3 to many different use cases, from ultra-high compression for visualization to ultra-high-speed compression between the CPU (or GPU) and the memory. Thanks to its unique set of features (customization, high compression ratio, high compression throughput, and excellent accuracy preservation), SZ3 won a 2021 R&D100 award. In this presentation, we present SZ3 and a new prediction-based decorrelation method that significantly improves compression ratios for climate datasets over state-of-the-art lossy compressors while preserving the same data accuracy.
Experiments based on CESM datasets show that SZ3 can achieve up to 300% higher compression ratios than SZ2 with the same compression error bound and similar compression throughput.
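The prediction-plus-quantization idea at the heart of the SZ family can be sketched in a few lines: predict each value from its decoded neighbor, quantize the prediction error against an absolute error bound, and entropy-code the small integer codes. This is a toy 1D previous-value predictor with zlib as the entropy stage; the actual SZ3 pipeline uses far more sophisticated multidimensional prediction and coding.

```python
import numpy as np
import zlib

def sz_like_compress(field, eps):
    """Error-bounded lossy compression sketch in the spirit of SZ:
    previous-value (1D Lorenzo) prediction, linear quantization with
    absolute error bound eps, then lossless coding via zlib.
    Illustrative toy only."""
    codes = np.empty(field.size, dtype=np.int32)
    recon = np.empty_like(field)
    prev = 0.0
    for i, v in enumerate(field):
        q = int(round((v - prev) / (2.0 * eps)))  # quantize the prediction error
        codes[i] = q
        prev = prev + q * 2.0 * eps               # decoder-visible reconstruction
        recon[i] = prev                           # guarantees |recon - v| <= eps
    return zlib.compress(codes.tobytes()), recon
```

Because the predictor only ever sees decoded values, the per-point error never exceeds eps, and smooth fields yield mostly tiny quantization codes that compress extremely well.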

How to cite: Cappello, F., Di, S., and Underwood, R.: Improving lossy compression for climate datasets with SZ3, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9741, https://doi.org/10.5194/egusphere-egu22-9741, 2022.

EGU22-9948 | Presentations | ESSI2.7

Exploring Lossy Compressibility through Statistical Correlations of Geophysical Datasets 

Julie Bessac, David Krasowska, Robert Underwood, Sheng Di, Jon Calhoun, and Franck Cappello

Lossy compression plays a growing role in geophysical and other computer-based simulations, whose output data on large-scale systems can span terabytes and even petabytes in some cases. Using error-bounded lossy compression reduces the amount of storage for each simulation; however, there is no known upper bound on the lossy compressibility of a given dataset. Correlation structures in the data, the choice of compressor, and the error bound are factors that govern achievable compression ratios and quality metrics. Analyzing these three factors provides one direction towards quantifying the limits of lossy compressibility. As a first step, we explore statistical methods to characterize the correlation structures present in several climate simulations and their relationships, through functional regression models, to compression ratios. In particular, for climate simulations from the Community Earth System Model (CESM) as well as hurricane simulations from Hurricane ISABEL (IEEE Visualization 2004 contest), the compression ratios of SZ, ZFP, and MGARD, widely used lossy compressors for scientific data, exhibit a logarithmic dependence on the global and local correlation ranges when combined with information on the variability of the considered fields through the variance or gradient magnitude. Further work will focus on providing a unified characterization of these relationships across compressors and error bounds. This constitutes a first step towards evaluating the theoretical limits of lossy compressibility, which could eventually be used to predict compression performance and to adapt compressors to the correlation structures present in the data.
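To give a flavor of the statistics involved, a simple correlation-range proxy and a variability measure can be computed as follows. These are our own illustrative constructions, not the authors' regression models.

```python
import numpy as np

def correlation_range(x):
    """Lag at which the empirical autocorrelation first drops below 1/e,
    a simple proxy for the correlation range related to compressibility."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]  # one-sided autocorrelation
    acf = acf / acf[0]
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return int(below[0]) if below.size else x.size

def mean_gradient_magnitude(f2d):
    """Average gradient magnitude of a 2D field, one of the variability
    measures combined with the correlation range."""
    gy, gx = np.gradient(f2d)
    return float(np.sqrt(gx**2 + gy**2).mean())
```

Smooth, long-range-correlated fields score a large correlation range and small gradients, and are exactly the fields on which error-bounded compressors achieve their highest ratios.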

How to cite: Bessac, J., Krasowska, D., Underwood, R., Di, S., Calhoun, J., and Cappello, F.: Exploring Lossy Compressibility through Statistical Correlations of Geophysical Datasets, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9948, https://doi.org/10.5194/egusphere-egu22-9948, 2022.

EGU22-10006 | Presentations | ESSI2.7

Scaling and performance assessment of TSMP under CPU-only and CPU-GPU configurations 

Daniel Caviedes-Voullième, Jörg Benke, Ghazal Tashakor, Stefan Poll, and Ilya Zhukov

Multiphysics Earth system models are good candidates for progressive porting of modules to run on accelerator hardware. Typically, these models have an inherently modular design to cope with the variety of numerical formulations and computational implementations required for the range of physical processes they represent. Progressively porting modules or submodels to accelerators such as GPUs implies that models must run on heterogeneous hardware. Foreseeably, exascale systems will make use of heterogeneous hardware, and exploring such heterogeneous configurations early on is therefore both important and challenging.

The Terrestrial Systems Modelling Platform (TSMP) is a scale-consistent, highly modular, massively parallel, fully integrated soil-vegetation-atmosphere modelling system. Currently, TSMP is based on the COSMO atmospheric model, the CLM land surface model, and the ParFlow hydrological model, linked together by means of the OASIS3-MCT library.

Recently, ParFlow was ported to GPUs, enabling the possibility of running TSMP in a heterogeneous configuration, that is, COSMO and CLM running on CPUs and ParFlow running on GPUs. The different computational demands of each submodel inherently result in non-trivial load balancing across the submodels. This has been addressed by studying the performance and scaling properties of the system for specific problems of interest. The new heterogeneous configurations prompt a re-assessment of load balancing, performance, and scaling, in order to identify optimal computational resource configurations and to re-evaluate the bottlenecks and inefficiencies that the heterogeneous model system can have.

In this contribution, we present first results on performance and scaling assessment of the heterogeneous TSMP, compared to its performance under homogeneous (CPU-only) configurations. We study strong and weak scaling, for different problem sizes, and evaluate parallel efficiency and power consumption, for homogeneous and heterogeneous jobs on the JUWELS supercomputer, and on the experimental DEEP-Cluster, both at the Jülich Supercomputing Centre. Additionally, we explore profiles and traces of selected cases, both on homogeneous and heterogeneous runs, to identify MPI communication bottlenecks and root causes of the load balancing issue.  

How to cite: Caviedes-Voullième, D., Benke, J., Tashakor, G., Poll, S., and Zhukov, I.: Scaling and performance assessment of TSMP under CPU-only and CPU-GPU configurations, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10006, https://doi.org/10.5194/egusphere-egu22-10006, 2022.

EGU22-10774 | Presentations | ESSI2.7

Understanding the effects of Modern Lossless and Lossy Compressors on the Community Earth Science Model 

Robert Underwood, Sheng Di, and Franck Cappello

Large-scale climate simulations such as the Community Earth Science Model (CESM) produce enormous volumes of data per run. Transferring and storing this volume of data can be challenging, leading researchers to consider data compression in order to mitigate the performance, monetary, and environmental costs. In this work, we survey eight methods, ranging from higher-order SVD, multigrid, transform, and prediction-based lossy compressors to specialized floating-point lossless and lossy compressors and general lossless compressors, to determine which methods are most effective at reducing the storage footprint. We consider four components (atmosphere, ice, land, and ocean) within CESM, taking into account the stringent quality thresholds required to preserve the integrity of climate research data. Our work goes beyond existing studies of compressor performance by considering these newer compression techniques and by accounting for the candidate quality thresholds identified in prior work by Hammerling et al. This provides a more realistic picture of the performance of lossy compression methods relative to lossless compression methods subject to each of these constraints, with up to a 5.2x improvement over the leading lossless compressor and 21x over no compression. Our work also features an automated method to identify a configuration that satisfies the quality requirements for each lossy compressor, agnostic to compressor implementations.
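A compressor-agnostic configuration search of this kind can be sketched as follows: walk the candidate error bounds from tightest to loosest and keep the most aggressive one whose reconstruction still passes the quality check. This is our own minimal construction, with a plain uniform quantizer standing in for the compressor; it is not the authors' tool.

```python
import numpy as np

def most_aggressive_bound(field, compress, quality_ok, candidate_bounds):
    """Return the largest candidate error bound whose reconstruction still
    passes the quality check. Agnostic to the compressor: any
    (field, bound) -> reconstruction callable works. Illustrative sketch."""
    best = None
    for eps in sorted(candidate_bounds):  # ascending: tightest bound first
        recon = compress(field, eps)
        if quality_ok(field, recon):
            best = eps                    # passes: remember and try a looser bound
        else:
            break                         # assume quality degrades monotonically
    return best

# Demo: uniform quantizer (absolute error <= eps) against a 0.1 max-error threshold.
field = np.random.default_rng(2).uniform(0.0, 100.0, 20000)
quantize = lambda f, e: np.round(f / (2.0 * e)) * (2.0 * e)
within_tol = lambda f, r: np.max(np.abs(f - r)) <= 0.1
chosen = most_aggressive_bound(field, quantize, within_tol, [0.01, 0.05, 0.2, 1.0])
```

Swapping in a real compressor and the climate-specific quality metrics only changes the two callables, which is what makes the search implementation-agnostic.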

How to cite: Underwood, R., Di, S., and Cappello, F.: Understanding the effects of Modern Lossless and Lossy Compressors on the Community Earth Science Model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10774, https://doi.org/10.5194/egusphere-egu22-10774, 2022.

EGU22-10919 | Presentations | ESSI2.7

The Pilot Lab Exascale Earth System Modelling 

Catrin I. Meyer and the PilotLab ExaESM Team

The Pilot Lab Exascale Earth System Modelling (PL-ExaESM) is a “Helmholtz-Incubator Information & Data Science” project that explores specific concepts to enable exascale readiness of Earth system models and associated workflows in Earth system science. PL-ExaESM provides a new platform for scientists of the Helmholtz Association to develop scientific and technological concepts for future-generation Earth system models and data analysis systems. Even though extreme events can lead to disruptive changes in society and the environment, current-generation models have limited skill, particularly with respect to the simulation of these events. Reliable quantification of extreme events requires models with unprecedentedly high resolution and timely analysis of huge volumes of observational and simulation data, which drastically increases the demand on computing power as well as data storage and analysis capacities. At the same time, the unprecedented complexity and heterogeneity of exascale systems will require new software paradigms for next-generation Earth system models as well as fundamentally new concepts for the integration of models and data. Specifically, novel solutions for the parallelisation and scheduling of model components, the handling and staging of huge data volumes, and a seamless integration of information management strategies throughout the entire process-value chain, from global Earth system simulations to local-scale impact models, are being developed in PL-ExaESM. The potential of machine learning to optimize these tasks is also investigated. At the end of the project, several program libraries and workflows will be available, which provide the basis for the development of next-generation Earth system models.

In the PL-ExaESM, scientists from 9 Helmholtz institutions work together to address 5 specific problems of exascale Earth system modelling:

  • Scalability: models are being ported to next-generation GPU processor technology and the codes are modularized so that computer scientists can better help to optimize the models on new hardware.
  • Load balancing: asynchronous workflows are being developed to allow for more efficient orchestration of the increasing model output while preserving the necessary flexibility to control the simulation output according to the scientific needs.
  • Data staging: new emerging dense memory technologies allow new ways of optimizing I/O operations of data-intensive applications running on HPC clusters and future Exascale systems.
  • System design: the results of dedicated performance tests of Earth system models and Earth system data workflows are analysed in light of potential improvements of the future exascale supercomputer system design.
  • Machine learning: modern machine learning approaches are tested for their suitability to replace computationally expensive model calculations and speed up the model simulations or make better use of available observation data.

How to cite: Meyer, C. I. and the PilotLab ExaESM Team: The Pilot Lab Exascale Earth System Modelling, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10919, https://doi.org/10.5194/egusphere-egu22-10919, 2022.

EGU22-11212 | Presentations | ESSI2.7

Lightweight embedded DSLs for geoscientific models. 

Zbigniew Piotrowski, Daniel Caviedes Voullieme, Jaro Hokkanen, Stefan Kollet, and Olaf Stein

On the map of ESM research and operational software efforts, a notable area is occupied by mid-size codes that benefit from an established code design and user base and are developed by domain scientists. Unlike the major operational frameworks and newly established software projects, however, developers of such codes cannot easily benefit from novel solutions providing performance portability, nor do they have access to software engineering teams capable of performing a full code rewrite targeting novel hardware architectures. While evolving accelerator programming paradigms like CUDA or OpenACC enable reasonably fast progress towards execution on heterogeneous architectures, they do not offer universal portability and immediately impair code readability and maintainability. In this contribution we report on a lightweight embedded Domain Specific Language (eDSL) approach that enables legacy CPU codes to execute on GPUs while being minimally invasive and maximizing code readability and developer productivity. In the implementation, the eDSL serves as a front end for hardware-dependent programming models such as CUDA. In addition, performance portability can be achieved efficiently by implementing parallel-execution and memory-abstraction programming models, such as Kokkos, as a backend. We evaluate the adaptation process and computational performance of two established geophysical codes: the ParFlow hydrologic model, written in C, and the Fortran-based dwarf encapsulating the MPDATA transport algorithm. Performance portability is demonstrated in the case of ParFlow. We present scalability results on state-of-the-art AMD CPUs and NVIDIA GPUs of the JUWELS Booster supercomputer. We discuss the advantages and limitations of the proposed approach in the context of other direct and DSL-based strategies for exploiting modern accelerator-based computing platforms.
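The front-end/back-end separation behind such an eDSL can be illustrated with a toy Python analogue: the kernel body is written once against an index, and a backend choice decides how it executes. All names here are invented for illustration; the real eDSLs front-end C and Fortran codes onto CUDA or Kokkos.

```python
import numpy as np

def parallel_for(n, body, backend="serial"):
    """Toy eDSL front end: one kernel body, several execution backends.
    A serial loop and a vectorized numpy dispatch stand in for the CPU
    and accelerator backends of the real implementations."""
    if backend == "serial":
        for i in range(n):          # plain CPU loop
            body(i)
    elif backend == "vectorized":
        body(np.arange(n))          # whole index space at once (numpy semantics)
    else:
        raise ValueError(f"unknown backend: {backend!r}")

# The same axpy kernel runs unchanged under both backends:
x = np.linspace(0.0, 1.0, 8)
y = np.zeros(8)

def axpy(i):
    y[i] = 2.0 * x[i] + 1.0

parallel_for(8, axpy)                         # serial execution
y_serial = y.copy()
y[:] = 0.0
parallel_for(8, axpy, backend="vectorized")   # same kernel, different backend
```

The point is that the kernel author never touches the dispatch machinery, which is what keeps the approach minimally invasive for legacy codes.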

How to cite: Piotrowski, Z., Caviedes Voullieme, D., Hokkanen, J., Kollet, S., and Stein, O.: Lightweight embedded DSLs for geoscientific models., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11212, https://doi.org/10.5194/egusphere-egu22-11212, 2022.

EGU22-11595 | Presentations | ESSI2.7 | Highlight

Pangeo for geolocating fish using biologging data 

Justus Magin, Mathieu Woillez, Antoine Queric, and Tina Odaka

In biologging, a small device attached to an animal is used to track its behaviour and environment. These data enable biologists to gain a better understanding of its movement, its preferred habitats, and the environmental conditions it needs to thrive, all of which is essential for the future protection of natural resources. For that, it is crucial to have georeferenced data on biological processes, such as fish migration, over a wide spatial and temporal range.

Since it is challenging to track fish directly in the water, models have been developed to geolocate fish from the high-resolution temperature and pressure time series obtained from data storage tags. In particular, reconstructing the trajectories of seabass using the temporal temperature changes obtained from biologging devices has been studied since 2010 (https://doi.org/10.1016/j.ecolmodel.2015.10.024). These fish tracks are computed from the likelihood of the temperature data obtained from the fish tag given reference geoscience data such as satellite observations and ocean physics model output. A high temporal and spatial resolution of the reference data plays a key role in the quality of the fish trajectories. However, the size and accessibility of these datasets, as well as the computing power required to process high-resolution data, remain technical barriers.

As the Pangeo ecosystem has been developed to solve such challenges in geoscience, we can take advantage of it in biologging. We use libraries such as intake, kerchunk, and fsspec to quickly load the data; xarray, pint, and dask to compute; and hvplot and Jupyter to display the results. The Pangeo software stack enables us to easily access the data and compute high-resolution fish tracks in a scalable and interactive manner.
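The statistical core of such geolocation can be sketched as a per-grid-cell observation likelihood: each cell of the reference field is scored by how well it explains the temperature the tag recorded. This is an illustrative toy of the building block only; the published model also uses pressure, time-series matching, and movement constraints.

```python
import numpy as np

def temperature_likelihood(ref_field, t_obs, sigma=0.5):
    """Per-grid-cell Gaussian likelihood of a single tag temperature record
    given a reference temperature field. Illustrative sketch of the
    geolocation building block, not the published model."""
    return np.exp(-((ref_field - t_obs) ** 2) / (2.0 * sigma**2))

# Synthetic meridional SST gradient: 20 degC at row 0 cooling to 10 degC.
ref = np.tile(np.linspace(20.0, 10.0, 51)[:, None], (1, 40))
like = temperature_likelihood(ref, t_obs=15.0)
row, col = np.unravel_index(np.argmax(like), like.shape)
```

Multiplying (or log-summing) such maps over the whole tag record, with a movement model between time steps, is what narrows the candidate positions down to a track, and is exactly the array workload that xarray and dask parallelize well.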

How to cite: Magin, J., Woillez, M., Queric, A., and Odaka, T.: Pangeo for geolocating fish using biologging data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11595, https://doi.org/10.5194/egusphere-egu22-11595, 2022.

EGU22-11729 | Presentations | ESSI2.7

Intercomparison of basin-to-global scale submesoscale-permitting ocean models at SWOT cross-overs 

Julien Le Sommer and Takaya Uchida and the SWOT Adopt-A-Crossover Ocean Model Intercomparison Project Team

With the increase in computational power, ocean models with kilometer-scale resolution have emerged over the last decade. Using these realistic simulations, we have been able to quantify the energetic exchanges between spatial scales and inform the design of eddy parametrizations. The increase in resolution, however, has drastically increased the volume of model output, making it difficult to transfer and analyze the data. The realism of individual models in representing the energetics down to numerical dissipation has also come into question. Here, we showcase a cloud-based analysis framework proposed by the Pangeo Project that aims to tackle such distribution and analysis challenges. We analyze seven submesoscale-permitting simulations, all on the cloud, in a crossover region of the upcoming SWOT altimeter mission near the Gulf Stream separation. The models used in this study are based on the NEMO, CROCO, MITgcm, HYCOM, FESOM and FIO-COM code bases. The cloud-based analysis framework: i) minimizes the cost of duplicating and storing ghost copies of data, and ii) allows for seamless sharing of analysis results amongst collaborators. In this poster, we will describe the framework and provide preliminary results (e.g. spectra, vertical buoyancy flux, and how it compares to predictions from the mixed-layer instability parametrization). Basin-to-global scale, submesoscale-permitting models are still at an early stage of development; their cost and carbon footprints are also rather large. It would, therefore, benefit the community to compile the different model configurations for future best practices. We also believe that an emphasis on data analysis strategies will be crucial for improving the models themselves.
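One of the diagnostics mentioned, an isotropic kinetic-energy spectrum, can be written compactly for a square, doubly periodic velocity snapshot. This is a minimal numpy sketch (no windowing or detrending), not the intercomparison code itself.

```python
import numpy as np

def isotropic_ke_spectrum(u, v):
    """Azimuthally binned kinetic-energy spectrum of a square, doubly
    periodic velocity snapshot, summed over integer-wavenumber shells.
    Minimal sketch of a standard intercomparison diagnostic."""
    n = u.shape[0]
    # Spectral KE density, normalized so the bins sum to the mean KE.
    ke_hat = 0.5 * (np.abs(np.fft.fft2(u))**2 + np.abs(np.fft.fft2(v))**2) / n**4
    k1d = np.fft.fftfreq(n) * n                  # integer wavenumbers
    kx, ky = np.meshgrid(k1d, k1d)
    shells = np.rint(np.sqrt(kx**2 + ky**2)).astype(int)
    return np.bincount(shells.ravel(), weights=ke_hat.ravel())
```

Comparing the slope of such spectra across models is one way the realism of the simulated energetics, down to numerical dissipation, is assessed.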



How to cite: Le Sommer, J. and Uchida, T. and the SWOT Adopt-A-Crossover Ocean Model Intercomparison Project Team: Intercomparison of basin-to-global scale submesoscale-permitting ocean models at SWOT cross-overs, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11729, https://doi.org/10.5194/egusphere-egu22-11729, 2022.

EGU22-12099 | Presentations | ESSI2.7

A vision and strategy to revamp ESM workflows at DKRZ 

Karsten Peters-von Gehlen, Ivonne Anders, Daniel Heydebreck, Christopher Kadow, Florian Ziemen, and Hannes Thiemann

The German Climate Computing Center (DKRZ) is an established topical IT service provider serving the needs of the German climate science community and their associated partners. At DKRZ, climate researchers have the means available to cover every aspect of the research life cycle: planning, model development and testing, model execution on the in-house HPC cluster (16 PFlops, mainly CPU-based, 130 PB disk storage), data analysis (batch jobs, Jupyter, Freva), data publication and dissemination via the Earth System Grid Federation (ESGF), as well as long-term data preservation either at the project level (little curation) or in the CoreTrustSeal-certified World Data Center for Climate (WDCC) (extensive curation along the FAIR data principles). A plethora of user support services offered by domain-expert staff complements DKRZ’s portfolio.

With the new HPC system coming online in early 2022 and a number of funded and to-be-funded projects exploiting the available computational resources for conducting, e.g., global storm-resolving (grid spacing O(1–3 km)) simulations on climatic timescales, the current interplay of DKRZ’s services needs to be revisited to devise a unified workflow that can handle the upcoming challenges.

This is why the above-mentioned projects will supply a significant amount of funds to conceive a framework to efficiently orchestrate the entire model development, model execution, and data handling workflow at DKRZ in close collaboration with the climate science community.

In this contribution, we will detail our vision of a revamped and versatile ESM orchestration framework at DKRZ. Currently, this vision is based on having the orchestration performed by the Freva system (http://doi.org/10.5334/jors.253), in which users will be able to kick off model compilation, compute, and analysis jobs. Furthermore, Freva enables seamless provenance tracking of the entire workflow. Together with the implementation of data publication, long-term archiving, and data dissemination workflows, the envisioned system provides a complete package of FAIR Digital Objects (FDOs) to researchers and allows for reproducibility, transparency, and reduction of data redundancy.

How to cite: Peters-von Gehlen, K., Anders, I., Heydebreck, D., Kadow, C., Ziemen, F., and Thiemann, H.: A vision and strategy to revamp ESM workflows at DKRZ, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12099, https://doi.org/10.5194/egusphere-egu22-12099, 2022.

EGU22-13028 | Presentations | ESSI2.7

Using a Pangeo platform on Azure to tackle global environmental challenges 

Timothy Lam, Alberto Arribas, Gavin Shaddick, Theo McCaie, and Jennifer Catto

The Pangeo project enables interactive, reproducible and scalable environmental research to be carried out using an integrated data-computational platform. Here we demonstrate a few examples that utilise a Pangeo platform on Microsoft Azure, supported by the Met Office, where global environmental challenges are explored and tackled collaboratively. They include: (1) analysing and quantifying drivers of low rainfall anomalies during boreal summer in Indonesian Borneo, using causal inference and causal networks to identify key teleconnections and their possible changes under a warming climate, which will contribute to seasonal forecasting efforts to strengthen the prevention and control of drought and fire multi-hazards over peatlands in the study region; (2) quantifying and communicating uncertainty in volcanic ash forecasts; and (3) exploring the cascading effects that follow the degradation and recovery of Caribbean coral reefs.

How to cite: Lam, T., Arribas, A., Shaddick, G., McCaie, T., and Catto, J.: Using a Pangeo platform on Azure to tackle global environmental challenges, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13028, https://doi.org/10.5194/egusphere-egu22-13028, 2022.

EGU22-13193 | Presentations | ESSI2.7

CliMetLab and Pangeo use case: Machine learning data pipeline for sub-seasonal To seasonal prediction (S2S) 

Florian Pinault, Aaron Spring, Frederic Vitart, and Baudouin Raoult

As machine learning algorithms are used more and more prominently in the meteorology and climate domains, the need for reference datasets has been identified as a priority. Moreover, boilerplate code for data handling is ubiquitous in scientific experiments. In order to focus on science, climate, meteorology, and data scientists need generic and reusable domain-specific tools. To achieve these goals, we used the plugin-based CliMetLab Python package along with many packages listed by Pangeo.


Our use case consists of providing data for machine learning algorithms in the context of the sub-seasonal to seasonal (S2S) prediction challenge 2021. The data amount to about 2 terabytes of model predictions from three different models. We experimented with providing the data in multiple formats: GRIB, NetCDF, and Zarr. A Pangeo recipe (using the Python package pangeo_forge_recipes) was used to generate the Zarr data, relying heavily on xarray and dask for parallelisation. All three versions of the S2S data have been stored in an S3 bucket located on the ECMWF European Weather Cloud (ECMWF-EWC).


CliMetLab aims at providing a simple interface to access climate and meteorological datasets, seamlessly downloading and caching data, converting it to xarray datasets or pandas dataframes, plotting it, and feeding it into machine learning frameworks such as TensorFlow or PyTorch. CliMetLab is open source and still in beta (https://climetlab.readthedocs.io). The main target platform of CliMetLab is Jupyter notebooks. Additionally, a CliMetLab plugin allows shipping dataset-specific code along with a well-defined published dataset. Taking advantage of the CliMetLab tools to minimize boilerplate code, a plugin has been developed for the S2S data as a companion Python package of the dataset.

How to cite: Pinault, F., Spring, A., Vitart, F., and Raoult, B.: CliMetLab and Pangeo use case: Machine learning data pipeline for sub-seasonal To seasonal prediction (S2S), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13193, https://doi.org/10.5194/egusphere-egu22-13193, 2022.

EGU22-13259 | Presentations | ESSI2.7

Adding Quantization to the NetCDF C and Fortran Libraries to Enable Lossy Compression 

Edward Hartnett and Charles Zender

The increasing volume of Earth science data sets continues to present challenges for large data producers. In order to support lossy compression in the netCDF C and Fortran libraries, we have added a quantize feature for netCDF floating-point variables. When the quantize feature is enabled, the data creator specifies the number of significant digits. As data are written, the netCDF libraries apply a quantization algorithm which guarantees that the number of significant digits (for the BitGroom and Granular BitRound algorithms) or bits (for the BitRound algorithm) will be preserved, while setting unneeded bits to a constant value. This allows zlib lossless compression (or any other lossless compression) to achieve better and faster compression.
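The effect is easy to reproduce: keeping only a few mantissa bits makes a lossless codec far more effective. Below is a numpy sketch in the spirit of BitRound (round to a fixed number of explicit mantissa bits), not the netCDF library code itself, which also implements BitGroom and Granular BitRound and handles edge cases this toy ignores.

```python
import numpy as np
import zlib

def bitround(a, keepbits):
    """Round float32 values to `keepbits` explicit mantissa bits (round to
    nearest), zeroing the remaining bits so lossless codecs compress better.
    Sketch of the BitRound idea; overflow near the exponent limits is ignored."""
    drop = 23 - keepbits                          # float32 has 23 mantissa bits
    bits = a.astype(np.float32).view(np.uint32)
    half = np.uint32(1 << (drop - 1))             # half ULP at the kept precision
    mask = np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)
    return ((bits + half) & mask).view(np.float32)

# Quantized data compresses far better under plain zlib:
field = np.random.default_rng(0).normal(15.0, 2.0, 100_000).astype(np.float32)
raw = zlib.compress(field.tobytes())
quantized = zlib.compress(bitround(field, keepbits=7).tobytes())
```

Because rounding to `keepbits` mantissa bits bounds the relative error by half a unit in the last kept place, the number of preserved significant bits is guaranteed regardless of the data values.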

How to cite: Hartnett, E. and Zender, C.: Adding Quantization to the NetCDF C and Fortran Libraries to Enable Lossy Compression, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13259, https://doi.org/10.5194/egusphere-egu22-13259, 2022.

EGU22-13542 | Presentations | ESSI2.7 | Highlight

Climate change adaptation digital twin to support decision making 

Jenni Kontkanen, Pekka Manninen, Francisco Doblas-Reyes, Sami Niemelä, and Bjorn Stevens

Climate change will have far-reaching impacts on human and natural systems during the 21st century. To increase the understanding of present and future climate impacts and to build resilience, improved Earth system modelling is required. The European Commission's Destination Earth (DestinE) initiative aims to contribute to this by developing high-precision digital twins (DTs) of the Earth. We present our solution for a climate change adaptation DT, one of the two DTs developed during the first phase of DestinE. The objective of the climate change adaptation DT is to improve the assessment of the impacts of climate change and of different adaptation actions at regional and national levels over multi-decadal timescales. This will be achieved by using two storm- and eddy-resolving global climate models, ICON (Icosahedral Nonhydrostatic Weather and Climate Model) and IFS (Integrated Forecasting System). The models will be run at a resolution of a few kilometers on the pre-exascale LUMI and MareNostrum5 supercomputers, which are flagship systems of the European High Performance Computing Joint Undertaking (EuroHPC JU) network. Following a radically different approach, the climate simulations will be combined with a set of impact models, which enables assessing impacts on different sectors and topics, such as forestry, hydrology, cryosphere, energy, and urban areas. The end goal is to create a new type of climate simulation, in which user requirements are an integral part of the workflow, so that adaptation solutions can be effectively deployed.

How to cite: Kontkanen, J., Manninen, P., Doblas-Reyes, F., Niemelä, S., and Stevens, B.: Climate change adaptation digital twin to support decision making, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13542, https://doi.org/10.5194/egusphere-egu22-13542, 2022.

EGU22-13556 | Presentations | ESSI2.7

Atmospheric Retrievals in a Modern Python Framework 

Mario Echeverri Bautista, Maximilian Maahn, Anton Verhoef, and Ad Stoffelen

Modern Machine Learning (ML) techniques applied in atmospheric modelling rely heavily on two aspects: good quality and good coverage of observations. Among others, Satellite Radiometer (SR) measurements (radiances or brightness temperatures) offer an excellent trade-off between these aspects; moreover, SR observations have been providing quite stable Fundamental Climate Data Records (FCDR) for years and are expected to continue to do so in the following decades. This work presents a framework for SR retrievals that uses modern ML standard packages from the SciPy and Pangeo ecosystems; moreover, our retrieval scheme leverages the powerful capabilities provided by NWPSAF's RTTOV and its Python wrapper. In terms of retrievals we stand on the shoulders of Bayesian estimation by using Optimal Estimation (OE), popularized by Rodgers for 1D atmospheric retrievals; we use pyOpEst, an open source package developed by Maahn. PyOptimalEstimation follows an object-oriented design, which makes it portable and highly maintainable.
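The Optimal Estimation scheme popularized by Rodgers amounts to a Gauss-Newton iteration toward the maximum a posteriori state. The following is a minimal generic sketch with NumPy, for illustration only (it is not the pyOpEst/PyOptimalEstimation API), demonstrated on a toy linear forward model:

```python
import numpy as np

def oe_retrieval(y, forward, jacobian, x_a, S_a, S_e, n_iter=10):
    """Rodgers-style Optimal Estimation via Gauss-Newton iteration.

    y        : observation vector
    forward  : forward model F(x) mapping state to observation space
    jacobian : function returning K = dF/dx at x
    x_a, S_a : prior (a priori) mean and covariance
    S_e      : observation error covariance
    """
    S_a_inv = np.linalg.inv(S_a)
    S_e_inv = np.linalg.inv(S_e)
    x = x_a.copy()
    for _ in range(n_iter):
        K = jacobian(x)
        # Posterior covariance: (K^T Se^-1 K + Sa^-1)^-1
        S_hat = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)
        # Gauss-Newton step toward the maximum a posteriori state
        x = x_a + S_hat @ K.T @ S_e_inv @ (y - forward(x) + K @ (x - x_a))
    return x, S_hat

# Toy linear problem: 3 "channels", 2 state elements
K0 = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
x_true = np.array([2.0, -1.0])
y = K0 @ x_true                              # noise-free synthetic observation
x_hat, S_hat = oe_retrieval(
    y, forward=lambda x: K0 @ x, jacobian=lambda x: K0,
    x_a=np.zeros(2), S_a=10.0 * np.eye(2), S_e=0.01 * np.eye(3),
)
```

With a weak prior and small observation errors, the retrieved state closely recovers the true state; for a linear forward model the iteration converges after a single step.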

The contribution presented here covers scientific software design aspects, algorithmic choices, open source contributions, processing speed and scalability. Furthermore, simple but efficient techniques such as cross-validation were used to evaluate different metrics; for initial testing we have used NWPSAF's model data and observation error covariances from the SR literature.

Open source and community development are two pillars of this work. Open source allows transparent, concurrent and continuous development, while community development brings together domain experts, software developers and scientists in general; these two ideas allow us both to profit from already developed and well supported tools (e.g. SciPy and Pangeo) and to contribute to others whose applications might benefit. This methodology has been used successfully all over the Data Science and ML universe, and we believe the Earth Observation (EO) community would benefit greatly in terms of streamlining the development and benchmarking of new solutions. Practical examples of success can be found in the Pytroll community.

Our work in progress is directly linked to present and near-future requirements of Earth Observation; in particular, the incoming SR data streams (for operational purposes) are increasing fast and by orders of magnitude. Missions like the EUMETSAT Polar System-Second Generation (EPS-SG, 2023) or the Copernicus Imaging Microwave Radiometer (CIMR, 2026) will require scalability and flexibility from the tools that digest such flows of data. We will discuss and show how operational tools can take advantage of the enormous community-based developments and standards and become game changers for EO.

How to cite: Echeverri Bautista, M., Maahn, M., Verhoef, A., and Stoffelen, A.: Atmospheric Retrievals in a Modern Python Framework, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13556, https://doi.org/10.5194/egusphere-egu22-13556, 2022.

Indexed 3D Scene Layers (I3S) is an open format for streaming and storing massive amounts of heterogeneously distributed geospatial content. Adopted in 2017 as the first OGC (Open Geospatial Consortium) Community Standard for 3D streaming and storage, I3S has been rapidly evolving to capture new use cases and techniques, advancing geospatial visualization and analysis. I3S enables the efficient transmission of various 3D geospatial data types – including discrete 3D objects with attributes, integrated surface meshes and point cloud data covering vast geographic areas, as well as highly detailed BIM (Building Information Model) content – to web browsers, mobile apps, and desktop applications.


In this session, we’ll explore various aspects of I3S, including the latest update to the standard, OGC I3S Version 1.2, which brought dramatic improvements in performance and scalability. We’ll describe and demonstrate various I3S features, such as the paged node access pattern – a compact Bounding Volume Hierarchy (BVH) representation that significantly reduces client-server traffic; a more compact geometry layout using a well-known quantization encoding (Draco); an advanced material definitions property that supports PBR (physically based rendering) of materials; and support for KTX2 (Basis) compressed textures, reducing the compressed texture size by over 60%. We’ll also demonstrate collaborative research work done to dramatically improve the speed of KTX2/Basis creation by leveraging both the CPU and GPU – contributions that benefit both the geospatial and 3D graphics communities.
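The geometry-compression idea behind quantization encodings such as Draco can be illustrated with a simple uniform quantizer: positions are snapped to an integer grid defined by an origin and a step size, which bounds the reconstruction error by half a step. This is a hedged NumPy sketch of the general principle, not Draco's actual encoding:

```python
import numpy as np

def quantize(positions, bits=14):
    """Uniformly quantize 3D positions to unsigned integers.
    Returns the integer coordinates plus the parameters needed to
    dequantize (origin and step size)."""
    origin = positions.min(axis=0)
    extent = (positions - origin).max()          # largest coordinate range
    step = extent / (2 ** bits - 1) if extent > 0 else 1.0
    q = np.round((positions - origin) / step).astype(np.uint32)
    return q, origin, step

def dequantize(q, origin, step):
    return origin + q.astype(np.float64) * step

pts = np.random.default_rng(0).uniform(-100, 100, size=(1000, 3))
q, origin, step = quantize(pts, bits=14)
rec = dequantize(q, origin, step)
max_err = np.abs(rec - pts).max()                # bounded by step / 2
```

Lowering `bits` shrinks the integer payload at the cost of a coarser grid; production encoders add entropy coding and connectivity compression on top of this step.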

We will also demonstrate examples of the different layer types and profiles supported in I3S and how the data structure and organization help to efficiently store segmentation/classification information as well as triangle- and point-level attribution. Technological advancements in 3D graphics, data structuring, mesh and texture compression, efficient client-side filtering and so forth have significantly contributed to a paradigm shift in how geospatial content is created and disseminated, regardless of size and scale. Formats such as I3S now allow 3D content to be authored once and consumed efficiently on various platforms, including desktop, web and mobile, for both offline and online access. This ‘create once, consume everywhere’ model has encouraged the dissemination and sharing of geospatial content for both planetary (whole-earth) and planar 3D visualization experiences. The session will showcase numerous examples (for desktop, web, and mobile experiences) illustrating the many advancements made in geospatial technologies that are ripe to be embraced in various geoscience disciplines.

Lastly, we’ll close by showcasing various patterns for distributing 3D content via I3S and demonstrate its consumption through open-source solutions such as loaders.gl and CesiumJS, which support the latest version of I3S. We’ll also show the latest additions to Tile Converter, an open-source solution that allows two-way conversion between 3D Tiles and I3S, further fostering interoperability in the geospatial community.

How to cite: Belayneh, T.: Indexed 3D Scene Layers (I3S) – an open standard for 3D GIS visualization on Web, Desktop and Mobile Platforms, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-686, https://doi.org/10.5194/egusphere-egu22-686, 2022.

EGU22-1237 | Presentations | ESSI2.9

rassta: Raster-based Spatial Stratification Algorithms 

Bryan Fuentes, Minerva Dorantes, and John Tipton

Spatial stratification of landscapes allows for the development of efficient sampling surveys, the inclusion of domain knowledge in data-driven modeling frameworks, and the production of information relating the spatial variability of response phenomena to that of landscape processes. This work presents the rassta R package as a collection of algorithms dedicated to the spatial stratification of landscapes, the calculation of landscape correspondence metrics across geographic space, and the application of these metrics for spatial sampling and modeling of environmental phenomena. The theoretical background of rassta is presented through references to several studies which have benefited from landscape stratification routines. The functionality of rassta is presented through code examples which are complemented with the geographic visualization of their outputs.

How to cite: Fuentes, B., Dorantes, M., and Tipton, J.: rassta: Raster-based Spatial Stratification Algorithms, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1237, https://doi.org/10.5194/egusphere-egu22-1237, 2022.

EGU22-3059 | Presentations | ESSI2.9

The 'Mine-the-Gaps' geospatial web application for visualizing and evaluating regional environmental data estimates 

Ann Gledson, Douglas Lowe, Manuele Reani, David Topping, and Caroline Jay

We present the 'Mine-the-Gaps' geospatial web application and use it to present our recently published series of cleaned and imputed environmental datasets. The Mine-the-Gaps tool positions environmental sensor and regional estimation data side-by-side on a map, allowing researchers to visually check sensor locations, regional coverage, and the likely accuracy of any regional approximation methods. Users can upload their own time-series geospatial datasets, but as a proof of concept, we present an online version of this tool (http://minethegaps.manchester.ac.uk) pre-loaded with our recently published environmental dataset. This contains 5 years of cleaned and pre-processed UK sensor data, originally from DEFRA (air quality) and Met Office (pollen and weather) monitoring stations, alongside our basic region estimations.

Sensors can’t be installed everywhere, so many areas are sparsely covered, with some measurements (e.g. pollen and SO2) even sparser than others. Mine-the-Gaps allows users to, for example, approximate the level of pollen for somebody reporting severe allergy symptoms in Manchester if the nearest sensor is in Chester. Two basic, distance-based estimation algorithms are pre-loaded but, more importantly, we provide researchers with methods to visualize, evaluate and compare new regional estimation techniques. Estimated data can be loaded into the application via a CSV file, or new algorithms can be added to our region estimations Python package, used by Mine-the-Gaps to calculate estimations on the fly. Uploaded data is presented on a map, and time-series charts can be displayed that compare sensor data with the approximations that would have been made for that location had the sensor been missing.
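A distance-based estimator of the kind pre-loaded in Mine-the-Gaps could be sketched as inverse-distance weighting; this is an illustrative implementation, and the application's actual pre-loaded algorithms may differ:

```python
import math

def idw_estimate(target, sensors, power=2):
    """Inverse-distance-weighted estimate at `target` from a list of
    (x, y, value) sensor tuples. Returns a sensor's value exactly when
    the target coincides with that sensor's location."""
    num = den = 0.0
    for x, y, value in sensors:
        d = math.hypot(target[0] - x, target[1] - y)
        if d == 0.0:
            return value                 # exact hit on a sensor
        w = 1.0 / d ** power             # closer sensors weigh more
        num += w * value
        den += w
    return num / den

# Three hypothetical sensors and an estimate at an unsampled point
sensors = [(0.0, 0.0, 10.0), (10.0, 0.0, 30.0), (0.0, 10.0, 20.0)]
estimate = idw_estimate((2.0, 2.0), sensors)
```

The estimate is always a convex combination of the observed values, so it stays within the range of the surrounding sensor readings.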

If users have their own sensor and/or region datasets, these can be uploaded to Mine-the-Gaps, which can be cloned from our code repository. Just 2 lines of script are required to get it running locally and accessible from any browser. Time-series sensor data from any location can then be uploaded and estimations made for any regions.

We publish Mine-the-Gaps, our environmental datasets, and associated tools with a focus on making these resources as accessible and adaptable as possible, allowing researchers to get on with the key job of evaluating the impact and variance of environmental data rather than becoming bogged down in pre-processing. Importantly, the emphasis on a transparent data processing and visualization methodology enables researchers to determine the usefulness, or not, of any technique used for mapping single-site measurements to represent a specific geographical region.

Potential use cases include the evaluation of current and new regional estimation methods for sensor data; evaluating the effects of variation in region shape and size when making estimations; helping to determine where a new sensor would be best positioned to fit with the existing networks; the visualization of irregularly spaced datasets; and visualizing sensor data over time.

How to cite: Gledson, A., Lowe, D., Reani, M., Topping, D., and Jay, C.: The 'Mine-the-Gaps' geospatial web application for visualizing and evaluating regional environmental data estimates, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3059, https://doi.org/10.5194/egusphere-egu22-3059, 2022.

Earth science data usually have distinct three-dimensional spatial characteristics; since the state of the atmosphere changes rapidly, the time dimension is also very important, and the variables describing various physical and chemical states together constitute the Earth science data cube. GIS, scientific computation and visualization tools are important for finding and extracting the patterns and scientific views behind the data. The MeteoInfo open-source software was developed as an integrated framework for both GIS applications and scientific computation, with two applications for end users: MeteoInfoMap and MeteoInfoLab. MeteoInfoMap makes it quick and easy to explore many kinds of geoscience data in the form of GIS layers, and includes spatial data editing, projection and analysis functions. MeteoInfoLab includes multidimensional array calculations, scientific computations such as linear algebra, and 2D/3D plotting functions, which are suitable for geoscience data analysis and visualization tasks. The software was developed using Java and Jython, which gives it good cross-platform capability: it runs on operating systems such as Windows, Linux/Unix, and macOS wherever Java is supported.

The functions can be conveniently extended through the development of plugins for MeteoInfoMap and toolboxes for MeteoInfoLab. For example, the TrajStat plugin was developed for air trajectory analysis and air pollution source identification, and has been widely used in studies of air pollution transport pathways and spatial sources. Several MeteoInfoLab toolboxes were also developed, for model evaluation (IMEP), air pollution emission data processing (EMIPS) and machine learning (MIML). MeteoInfoLab has functions similar to Python scientific packages such as numpy, pandas and matplotlib, and Jython is essentially Python implemented in Java, so users with Python experience can learn MeteoInfoLab easily, and vice versa. The 3D visualization functions of MeteoInfoLab are particularly powerful due to OpenGL acceleration, and a 3D Earth coordinate system is supported for plotting geoscience data on a virtual globe.

References:

Wang, Y.Q., 2014. MeteoInfo: GIS software for meteorological data visualization and analysis. Meteorological Applications, 21: 360-368.

Wang, Y.Q., 2019. An Open Source Software Suite for Multi-Dimensional Meteorological Data Computation and Visualisation. Journal of Open Research Software, 7(1), p.21. DOI: http://doi.org/10.5334/jors.267

Wang, Y.Q., Zhang, X.Y. and Draxler, R., 2009. TrajStat: GIS-based software that uses various trajectory statistical analysis methods to identify potential sources from long-term air pollution measurement data. Environmental Modelling & Software, 24: 938-939.

How to cite: Wang, Y.: MeteoInfo: An open-source GIS, scientific computation and visualization platform, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3492, https://doi.org/10.5194/egusphere-egu22-3492, 2022.

EGU22-3509 | Presentations | ESSI2.9

Simulation of slow geomorphic flows with r.avaflow 

Martin Mergili, Andreas Kellerer-Pirklbauer-Eulenstein, Christian Bauer, and Jan-Thomas Fischer

GIS-based open-source simulation tools for extremely rapid mass flow processes such as snow avalanches, rock avalanches, or debris flows are readily available, covering a broad range of complexity levels – e.g., from single-phase to multi-phase. However, these tools are not suitable for slower types of mass flows characterized by high viscosities. Without appropriate numerical treatment, the momentum balance equations conventionally used for rapid flows often become numerically unstable at high viscosities, leading to immediate reversal of the flow direction or to stopping. GIS-based simulation efforts for slow geomorphic flows are reported in the literature, and open-source tools are available for specific phenomena such as glaciers, but no comprehensive and readily usable simulation tool has been proposed yet.

We present a simple depth-averaged model implementation for the simulation of slow geomorphic flows, including glaciers, rock glaciers, highly viscous lava flows, and those flow-type landslides not classified as extremely or very rapid. Thereby, we use an equilibrium-of-motion concept. For each time step, flow momentum and velocity are computed as the equilibrium between accelerating gravitational forces and decelerating viscous forces, also including a simple law for basal sliding. Momentum balances are not carried over from one time step to the next, meaning that inertial forces, which are not important for slow-moving mass flows, are neglected. Whereas these basic principles are applied to all relevant processes, there is flexibility with regard to the details of model formulation and parameterization: e.g., the well-established shallow-ice approximation can be used to simulate glacier flow.
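As an illustration of the equilibrium-of-motion idea, the shallow-ice approximation mentioned above yields the depth-averaged deformational velocity directly from the local balance of driving stress and viscous deformation. This is a textbook sketch with illustrative parameter values, not the r.avaflow implementation:

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def sia_velocity(h, slope_rad, A=2.4e-24, n=3, rho=917.0, g=9.81):
    """Depth-averaged deformational flow velocity (m/s) from the
    shallow-ice approximation: u = 2A/(n+2) * tau_b^n * h, with basal
    shear stress tau_b = rho * g * h * sin(slope). A is Glen's flow-law
    rate factor (Pa^-n s^-1; the default is typical of temperate ice)."""
    tau_b = rho * g * h * math.sin(slope_rad)
    return 2.0 * A / (n + 2) * tau_b ** n * h

# 100 m of ice on a 3-degree slope, converted to metres per year
u = sia_velocity(h=100.0, slope_rad=math.radians(3.0)) * SECONDS_PER_YEAR
```

Because the momentum balance is re-evaluated from the current geometry at every time step, no inertial terms carry over between steps, matching the equilibrium-of-motion concept described above. A basal sliding term would simply be added to the deformational velocity.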

The model is implemented with the GRASS GIS-based open-source mass flow simulation framework r.avaflow and demonstrated on four case studies: an earth flow, the growth of a lava dome, a rock glacier, and a glacier (considering accumulation and ablation). All four processes were reproduced in a plausible way. However, parameterization remains a challenge due to spatio-temporal changes and temperature dependency of viscosity and basal sliding. Our model and its implementation open up new possibilities for climate change impact studies, natural hazard analysis, and environmental education.

How to cite: Mergili, M., Kellerer-Pirklbauer-Eulenstein, A., Bauer, C., and Fischer, J.-T.: Simulation of slow geomorphic flows with r.avaflow, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3509, https://doi.org/10.5194/egusphere-egu22-3509, 2022.

Providing information on the context of long-term measurements, i.e. the observation and research facilities where measurements were made, is key for re-using data generated for a defined area. This information is also needed to manage the site networks of research infrastructures and research networks, and to link in-situ measurements to earth observation. To cover these requirements, the Dynamic Ecological Information Management System – Site and Dataset Registry (DEIMS-SDR, https://www.deims.org/) allows the description of in-situ environmental observation or experimental sites, implementing a multipurpose data model and generating persistent, unique and resolvable identifiers for each site. The aim of DEIMS-SDR is to collect site information in a catalogue describing a wide range of sites across the globe, providing information including each site's location, ecosystems, facilities, measured parameters and research themes, and making that standardised information openly available. DEIMS-SDR currently stores over 1200 site records along a wide geographic, altitudinal and ecosystem gradient. To address research needs as well as to enhance interoperability and machine readability, the site data model has been revised and compared to existing de-facto standards, resulting in a more modular structure.

The presentation describes the outcomes of the revision of the data model, the conceptual considerations behind it and how it is implemented. We illustrate the capabilities of the data model focussing on the description of multi-layered geographic information that is needed to accurately document sites and the various observation locations they might cover. This represents a major step forward in both the development of DEIMS-SDR as well as the interoperable and cross-disciplinary documentation of sites in general.

How to cite: Wohner, C. and Peterseil, J.: Managing multi-layered geographic information to describe environmental monitoring and research sites using DEIMS-SDR, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5451, https://doi.org/10.5194/egusphere-egu22-5451, 2022.

EGU22-5490 | Presentations | ESSI2.9

Applying water requirements into metadata in the era of SDGs and Essential Variables: semantics, quality parameters and discoverability in the GEM+ 

Ivette Serral, Joan Masó, Núria Julià, Ioannis Manakos, George Milis, and Lluís Pesquer

WQeMS aims to provide an operational Water Quality Emergency Monitoring Service to the water utilities industry in relation to the quality of the ‘water we drink’. This Copernicus service focuses on monitoring lakes used for the delivery of drinking water and will provide open geospatial data products structured in Essential Water Variables. While Essential Climate Variables are fully defined, a set of Essential Water Variables was proposed by GEOSS but never fully adopted. In this communication we will present a metadata manager tool called GEM+ that adopts a general framework for Essential Variables (EVs), including a renewed proposal for Essential Water Variables. EVs are included in a keyword library that relates them to SDG indicators. In addition, GEM+ includes a library of quality measures defined in the QualityML vocabulary (which inherits and extends the UncertML approach) that will be used and tested for the Water Quality Emergency Monitoring Service. The need for a quality vocabulary addressed by QualityML is now being taken up by the ISO 19157-3 proposal. GEM+ also implements the ISO 19115-1 approach for lineage (provenance), which allows careful documentation of the workflows used to create the geospatial products as a mechanism to describe their traceability, quality and reproducibility. Examples of water-related datasets have been prepared, showing how EVs, quality measures and provenance are used to semantically tag data, document quantitative quality estimations, and present the sources and processes used to elaborate the candidate products of this Copernicus service.

 

WQeMS has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101004157.

How to cite: Serral, I., Masó, J., Julià, N., Manakos, I., Milis, G., and Pesquer, L.: Applying water requirements into metadata in the era of SDGs and Essential Variables: semantics, quality parameters and discoverability in the GEM+, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5490, https://doi.org/10.5194/egusphere-egu22-5490, 2022.

EGU22-5774 | Presentations | ESSI2.9

SciQLop: an open source project for in situ data analysis 

Alexis Jeandet, Nicolas Aunai, Vincent Génot, Alexandre Schulz, Benjamin Renard, Michotte de Welle Bayane, and Gautier Nguyen

The SCIentific Qt application for Learning from Observations of Plasmas (SciQLop) project makes it easy to discover, retrieve, plot and label in-situ space physics measurements from remote servers such as Coordinated Data Analysis Web (CDAWeb) or Automated Multi-Dataset Analysis (AMDA). Analyzing data from a single instrument on a given mission can raise technical difficulties such as finding where to get the data, how to get them, and sometimes how to read them. Building, for example, a machine-learning pipeline involving multiple instruments or even multiple spacecraft missions can thus be very challenging. Our goal is to remove all these technical difficulties, without sacrificing performance, to allow scientists to focus on data analysis.
SciQLop development started in 2015 as a C++ graphical application, funded first by the Paris-Saclay Center for Data Science (CDS), then by Paris-Saclay SPACEOBS, before joining the Plasma Physics Data Center (CDPP) in 2019. It has evolved from a monolithic C++ graphical application into a collection of simple and reusable Python or C++ packages, each solving one problem at a time, increasing our chances of reaching users and contributors.

The SciQLop project is composed of the following tools:

  • Speasy: An easy to use Python package to retrieve data from remote servers with multi-layer cache support.
  • Speasy_proxy: A self-hostable, chainable remote cache for Speasy written as a simple Python package.
  • Broni: A Python package which finds intersections between spacecraft trajectories and simple shapes or physical models such as the magnetosheath.
  • Orbit-viewer: A Python graphical user interface (GUI) for Broni.
  • TSCat: A Python package used as backend for catalogs of events storage.
  • TSCat-GUI: A Python graphical user interface (GUI).
  • SciQLop-GUI: An extensible and efficient user interface to visualize and label time series, with an embedded IPython terminal.
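The multi-layer cache idea behind Speasy and speasy_proxy can be sketched as follows; this is a conceptual illustration of the pattern, not the actual Speasy API, and the product key is hypothetical:

```python
class LayeredCache:
    """Minimal multi-layer cache: check the fast layers first, fall back
    to the remote fetcher, then populate every faster layer on the way out."""

    def __init__(self, fetch):
        self.memory = {}        # layer 1: in-process cache
        self.disk = {}          # layer 2: stand-in for an on-disk / proxy cache
        self.fetch = fetch      # last resort: the remote data server
        self.remote_calls = 0   # counts how often the server is actually hit

    def get(self, key):
        if key in self.memory:
            return self.memory[key]
        if key in self.disk:
            value = self.disk[key]
        else:
            value = self.fetch(key)
            self.remote_calls += 1
            self.disk[key] = value
        self.memory[key] = value     # promote into the fastest layer
        return value

cache = LayeredCache(fetch=lambda key: f"data for {key}")
a = cache.get("mms1/fgm/2019-01-01")   # fetched from the remote server
b = cache.get("mms1/fgm/2019-01-01")   # served from the memory layer
```

Chaining such caches (a local one in front of a shared, self-hosted proxy) is what lets repeated requests for the same interval avoid hitting CDAWeb or AMDA again.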

While some components are production ready and already used for science, SciQLop is still in development and the landscape is moving quite fast.
In this presentation we will give an overview of SciQLop, demonstrate its benefits using specific case studies, and finally discuss planned feature developments.

How to cite: Jeandet, A., Aunai, N., Génot, V., Schulz, A., Renard, B., Bayane, M. D. W., and Nguyen, G.: SciQLop: an open source project for in situ data analysis, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5774, https://doi.org/10.5194/egusphere-egu22-5774, 2022.

EGU22-5940 | Presentations | ESSI2.9

Approaches for Enabling Interoperable Enterprise Data Search: Insights from NASA’s Science Mission Directorate (SMD) Catalog Project 

Kaylin Bugbee, Rahul Ramachandran, Ashish Acharya, Dai-Hai Ton That, John Hedman, David Bitner, Ahmed Eleish, Charles Driessnack, Wesley Adams, and Emily Foshee

NASA’s Science Mission Directorate (SMD) is working to build an open-source science infrastructure to enable open, collaborative and interdisciplinary science. One key component in the open-source science infrastructure is the SMD data catalog project. The SMD data catalog project is building an integrated SMD enterprise search capability to enable discovery of open data across SMD’s five divisions, including Astrophysics, Biological and Physical Sciences, Earth Science, Heliophysics and Planetary Science. In order to efficiently integrate heterogeneous data types across the SMD enterprise, the SMD data catalog project considered three approaches for implementation including semantic mapping, graph data stores and Cognitive Search capabilities. In this presentation, we will describe the SMD data catalog project business case, the three identified technical approaches and our assessment of those approaches. We will also provide a summary of the overall technical assessment process so that it may inform other agencies and organizations who may be implementing similar initiatives. 

How to cite: Bugbee, K., Ramachandran, R., Acharya, A., Ton That, D.-H., Hedman, J., Bitner, D., Eleish, A., Driessnack, C., Adams, W., and Foshee, E.: Approaches for Enabling Interoperable Enterprise Data Search: Insights from NASA’s Science Mission Directorate (SMD) Catalog Project, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5940, https://doi.org/10.5194/egusphere-egu22-5940, 2022.

EGU22-8671 | Presentations | ESSI2.9

Virtual Laser Scanning using HELIOS++ - Applications in Machine Learning and Forestry 

Lukas Winiwarter, Alberto Manuel Esmorís Pena, Vivien Zahs, Hannah Weiser, Mark Searle, Katharina Anders, and Bernhard Höfle

Virtual Laser Scanning (VLS) provides a remote sensing method to generate 3D point clouds which can, in certain cases, replace real data acquisition. A prerequisite is a suitable substitute of reality, modelling the 3D scene, the scanning system, the platform, the laser beam transmission, the beam-scene interaction, and the echo detection. The suitability of simulated laser scanning data largely depends on the application, and more realistic simulations come with stricter requirements on input data quality and higher computational costs. It is therefore important that the simulation software allows this trade-off to be tuned.

With the scientific software HELIOS++ [1], we provide an open source solution to acquire VLS data, where this trade-off can be tuned easily. HELIOS++ is implemented in C++ for optimized runtimes, and provides bindings in Python to allow integration into scripting environments (e.g. GIS plugins, Jupyter Notebooks).

The HELIOS++ VLS concept is based on a modular design. This allows the user to quickly exchange single simulation components, such as the scanner or the 3D scene. The simulation of diverging laser beams and the recording of full waveforms is supported via a subray tracing approach: depending on the desired physical realism and accuracy, a user-defined number of concentric circles approximate a single laser beam. On each circle, individual subrays are cast into the scene, which can then intersect with a single object and produce a hit. The returned waveforms are subsequently added together. This allows the simulator to detect multiple echoes for each pulse. The waveforms can be exported for further analysis.
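The subray construction described above can be sketched as concentric rings of angular offsets out to half the beam divergence, plus a central ray. This is a simplified geometric sketch; the exact ring placement and energy weighting in HELIOS++ may differ:

```python
import math

def subray_offsets(beam_divergence, n_rings, pts_per_ring=8):
    """Angular offsets (radians, in the two directions perpendicular to
    the beam axis) of subrays approximating a diverging beam: one central
    ray plus `n_rings` concentric rings out to half the beam divergence,
    with `pts_per_ring` subrays evenly spaced on each ring."""
    offsets = [(0.0, 0.0)]                       # central subray
    for i in range(1, n_rings + 1):
        r = (beam_divergence / 2.0) * i / n_rings
        for k in range(pts_per_ring):
            phi = 2.0 * math.pi * k / pts_per_ring
            offsets.append((r * math.cos(phi), r * math.sin(phi)))
    return offsets

rays = subray_offsets(beam_divergence=0.0003, n_rings=3)   # a 0.3 mrad beam
```

Each offset is cast as an individual ray; increasing `n_rings` trades computation time for a more faithful sampling of the beam footprint, which is exactly the realism/cost trade-off discussed above.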

In this contribution, we present main applications of HELIOS++ as a general-purpose LiDAR simulator. The first application is forestry, where green vegetation can be represented by different 3D model types. As the simulation of individual leaves as 3D mesh models requires high computational power, voxel-based methods have recently been proposed. HELIOS++ also supports the simulation of semitransparent voxels, where a subray has a certain probability of producing a return when traversing a voxel. This probability depends on the incidence angle and the leaf angle distribution (e.g., planophile, erectophile, …), the traversal length through the voxel, and the leaf area density of the voxel, which can, e.g., be derived from a terrestrial laser scanning point cloud. Tuning the subray parameters allows recreating the vertical point density profiles of real surveys.
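The traversal probability can be illustrated with a Beer-Lambert attenuation law, where a projection coefficient G stands in for the incidence-angle and leaf-angle-distribution dependence. This is a simplified sketch of the general principle, not the exact HELIOS++ formulation:

```python
import math

def hit_probability(lad, path_length, G=0.5):
    """Probability that a subray traversing a vegetation voxel produces a
    return, following Beer-Lambert attenuation: P = 1 - exp(-G * LAD * L).

    lad         : leaf area density of the voxel (m^2 / m^3)
    path_length : traversal length through the voxel (m)
    G           : projection coefficient encoding the leaf angle
                  distribution (0.5 corresponds to a spherical one)
    """
    return 1.0 - math.exp(-G * lad * path_length)

p_sparse = hit_probability(lad=0.2, path_length=0.5)   # thin canopy voxel
p_dense = hit_probability(lad=2.0, path_length=0.5)    # dense canopy voxel
```

Longer traversal paths and denser foliage both raise the interaction probability, which is how voxel grids derived from terrestrial laser scanning reproduce realistic canopy penetration.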

A second use case is the generation of training data for machine learning algorithms. Recently, several methods for Deep Learning on point clouds have been presented. However, such methods require immense amounts of training data to achieve acceptable performance. We present how VLS can be used to generate training data for machine learning classifiers, and how different sensor settings influence the classification results.

This contribution provides an introduction to VLS, possible use cases, pitfalls and best practices for successful application of laser scanning simulation.

[1] Winiwarter et al., 2021, DOI: 10.1016/j.rse.2021.112772

How to cite: Winiwarter, L., Esmorís Pena, A. M., Zahs, V., Weiser, H., Searle, M., Anders, K., and Höfle, B.: Virtual Laser Scanning using HELIOS++ - Applications in Machine Learning and Forestry, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8671, https://doi.org/10.5194/egusphere-egu22-8671, 2022.

EGU22-9454 | Presentations | ESSI2.9

GEE and Machine Learning for mapping burnt areas from ESA’s Sentinel-2 demonstrated in a Greek setting 

Ioanna Tselka, Spyridon E. Detsikas, Isidora Isis Demertzi, George P. Petropoulos, Dimitris Triantakonstantis, and Efthimios Karymbalis

Climate change has resulted in an increase in the occurrence and frequency of natural disasters worldwide. Wildfire incidents are of particular concern today, constituting one of the greatest problems due to their ecological, economic and social impacts. It is therefore very important to obtain accurate and robust information on burnt areas. Recent advances in the field of geoinformation have allowed the development of cloud-based platforms for EO data processing, such as Google Earth Engine (GEE). The latter allows rapid and efficient processing of large amounts of data, saving cost and time, since there is no need to download the EO datasets and process them locally in specialized software packages using one's own computing resources. In the present study, a GEE-based approach that exploits machine learning (ML) techniques is developed with the purpose of automating the mapping of burnt areas from ESA's Sentinel-2 imagery. One of the largest wildfire events of the summer of 2021, which occurred on the outskirts of Athens, Greece, is used as a case study to demonstrate the developed scheme. A Sentinel-2 image, obtained from GEE immediately after the fire event, was combined with ML classifiers to map the burnt area at the fire-affected site. Accuracy assessment was conducted on the basis of both the error matrix approach and the Copernicus Rapid Mapping operational product specific to this fire event. All geospatial analysis was conducted in a GIS environment. Our results evidenced the ability of the synergistic use of Sentinel-2 imagery with ML to map the burnt area in the studied region accurately and robustly. This information can provide valuable help towards prioritizing activities relevant to the rehabilitation of fire-affected areas and post-fire management.
Last but not least, this study provides further evidence of the unique advantages of GEE towards the automation of burnt area delineation over large scales.
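The abstract does not name the specific ML classifiers used; as a hedged illustration of the spectral reasoning behind burnt-area mapping, the sketch below uses the Normalized Burn Ratio (NBR), a common baseline alongside ML approaches. The reflectance values and the 0.27 dNBR threshold are made-up examples, not values from the study:

```python
def nbr(nir, swir):
    """Normalized Burn Ratio from near-infrared and shortwave-infrared reflectance."""
    return (nir - swir) / (nir + swir)

def burnt_mask(pre, post, threshold=0.27):
    """Flag pixels whose NBR drop (dNBR = pre - post) exceeds the threshold."""
    return [(b - a) > threshold for b, a in zip(pre, post)]

# Synthetic pre-/post-fire Sentinel-2 reflectance for two pixels:
# pixel 0 stays vegetated, pixel 1 burns (NIR collapses, SWIR rises).
pre  = [nbr(0.45, 0.15), nbr(0.42, 0.16)]
post = [nbr(0.44, 0.15), nbr(0.12, 0.30)]
print(burnt_mask(pre, post))  # [False, True]
```

In a GEE workflow the same per-pixel logic would run server-side over whole image collections rather than on individual values.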

KEYWORDS: GEE, Machine Learning, Sentinel-2, Burnt area mapping, Copernicus

How to cite: Tselka, I., Detsikas, S. E., Demertzi, I. I., Petropoulos, G. P., Triantakonstantis, D., and Karymbalis, E.: GEE and Machine Learning for mapping burnt areas from ESA’s Sentinel-2 demonstrated in a Greek setting, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9454, https://doi.org/10.5194/egusphere-egu22-9454, 2022.

EGU22-11486 | Presentations | ESSI2.9

Geopropy: An open source tool to generate 3D geological cross sections 

Ashkan Hassanzadeh, Enric Vázquez-Suñé, Mercè Corbella, and Rotman Criollo

Cross sections play a significant role in environmental and geological studies. In general, underground models can be generated by experienced geologists or derived from mathematically based interpolation of geological data. In this work we present Geopropy, a hybrid knowledge- and data-driven tool that mimics the straightforward stages a geologist follows to generate a geological cross section, taking the available data into account without using complex mathematical interpolation algorithms. Geopropy separates the areas with a single possible geological outcome from those with multiple possible geological scenarios, based on the given hard data. The algorithm creates the cross section in the simple areas; to reach a unique outcome elsewhere, the user is asked for decisions or additional hard data in semi-automatic and manual stages, depending on the complexity of the cross section. The outputs are 3D shapefiles that are checked again against the introduced hard data to avoid inconsistencies or possible personal biases. Geopropy is thus an open-source Python library that supports geologists in explicit modelling, aiming at simple, consistent and fast results.
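The abstract does not show Geopropy's actual API, so the sketch below is only an illustration of the underlying idea, with hypothetical names: segments between boreholes whose hard data admit a single layer geometry are handled automatically, while the rest are flagged for user decisions:

```python
def segment_complexity(borehole_a, borehole_b):
    """Classify the cross-section segment between two adjacent boreholes.

    Each borehole is an ordered top-to-bottom list of layer names (hard data).
    If both logs show the same layer sequence, the segment has a single
    plausible geometry ('simple'); otherwise the geologist must choose
    between scenarios ('ambiguous')."""
    return "simple" if borehole_a == borehole_b else "ambiguous"

logs = [["topsoil", "clay", "sand"],
        ["topsoil", "clay", "sand"],
        ["topsoil", "sand"]]          # the clay layer pinches out here
segments = [segment_complexity(a, b) for a, b in zip(logs, logs[1:])]
print(segments)  # ['simple', 'ambiguous']
```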

How to cite: Hassanzadeh, A., Vázquez-Suñé, E., Corbella, M., and Criollo, R.: Geopropy: An open source tool to generate 3D geological cross sections, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11486, https://doi.org/10.5194/egusphere-egu22-11486, 2022.

EGU22-11722 | Presentations | ESSI2.9

Monitoring morphological changes in river deltas exploiting GEE and the full Landsat archive 

Isidora Isis Demertzi, Spyridon E. Detsikas, Ioanna Tselka, Ioanna Tzanavari, Dimitris Triantakonstantis, Efthimios Karymbalis, and George P. Petropoulos

River deltas are considered among the most diverse ecosystems, with significant environmental and agricultural importance. These landscapes are vulnerable to any human activity or natural process that can disturb the fragile balance between water and land, causing morphological changes in the delta fronts. Earth observation (EO), with its capability to provide systematic, inter-temporal and cost-effective data, offers promising potential for monitoring the dynamic changes of river deltas. Recent advances in geoinformation technologies have allowed the development of cloud-based platforms for EO processing such as Google Earth Engine (GEE), which offers unique advantages such as the rapid processing of large amounts of data in a cost- and time-efficient manner. This study aims to assess the added value of GEE in monitoring the coastal surface area of river deltas based on the full Landsat archive (TM, ETM+, OLI, L9) and a machine learning (ML) technique. As a case study, two river deltas located in northern Greece, those of the Axios and Aliakmonas rivers, were selected. These are two of the largest rivers in the country, the Axios also being the second largest in the Balkans. Their joint river deltas create a fertile valley of great environmental and agricultural importance, which has exhibited very strong dynamics in its morphological characteristics over the last decades. To gain better insight into the coastal dynamics of the studied region, Landsat multi-spectral data covering the period from 1984 to the present were integrated into GEE, and an ML classification approach was developed in the cloud-based environment. The delta dynamics of the two rivers were also mapped independently using photo-interpretation, in accordance with other studies, serving as our reference dataset.
All geospatial analysis of the extracted morphological features of the river deltas was conducted in a geographical information system (GIS) environment. Our results evidenced the unique advantages of cloud platforms such as GEE towards the operationalization of the investigated approaches for coastal morphological changes such as those found in the studied river deltas. Unique characteristics of the methodology proposed herein are the exploitation of the cloud-based platform GEE together with an advanced ML image-processing algorithm and the full utilization of the Landsat images available today. The proposed approach can also be fully automated, is transferable to other similar areas, and can prove valuable in understanding spatiotemporal changes in coastal surface area over large regions.
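A minimal sketch of quantifying delta surface-area change between two Landsat epochs, assuming a simple NDWI water/land split; the reflectance values, the zero threshold and the 30 m pixel size are illustrative assumptions, and the study's actual ML classifier is not reproduced here:

```python
def ndwi(green, nir):
    """McFeeters NDWI; positive values indicate open water."""
    return (green - nir) / (green + nir)

def land_area_km2(pixels, pixel_m=30):
    """Land area of one Landsat epoch from (green, NIR) reflectance pairs."""
    land = [1 for g, n in pixels if ndwi(g, n) <= 0]
    return len(land) * pixel_m ** 2 / 1e6

# Two synthetic epochs of the same four coastal pixels: between 1984 and
# 2021 one land pixel is lost to the sea (all reflectances are made up).
epoch_1984 = [(0.10, 0.40), (0.10, 0.40), (0.30, 0.10), (0.10, 0.40)]
epoch_2021 = [(0.10, 0.40), (0.30, 0.10), (0.30, 0.10), (0.10, 0.40)]
print(land_area_km2(epoch_1984) - land_area_km2(epoch_2021))  # km² of land lost
```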

KEYWORDS: Google Earth Engine, Landsat, Machine Learning, Earth Observation, river delta

How to cite: Demertzi, I. I., Detsikas, S. E., Tselka, I., Tzanavari, I., Triantakonstantis, D., Karymbalis, E., and Petropoulos, G. P.: Monitoring morphological changes in river deltas exploiting GEE and the full Landsat archive, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11722, https://doi.org/10.5194/egusphere-egu22-11722, 2022.

EGU22-12167 | Presentations | ESSI2.9

Brokering approach based implementation for the national hydrological and meteorological information system in Italy 

Enrico Boldrini, Roberto Roncella, Fabrizio Papeschi, Paolo Mazzetti, Marco Casaioli, Stefano Mariani, Martina Bussettini, Barbara Lastoria, and Silvano Pecora

The Italian national hydrological and meteorological information system is being operationally implemented by the Regional Hydrological Services (SIR), under the coordination of the Italian Institute for Environmental Protection and Research (ISPRA) and with the collaboration of the National Research Council of Italy, Institute of Atmospheric Pollution Research (CNR-IIA), for the architectural design and implementation of the central components, and of the National Institute for Nuclear Physics (INFN) as cloud computing service developer and infrastructure provider. This work is funded by the Italian Ministry of Ecological Transition under the national initiative “Piano Operativo Ambiente FSC 2014-2020”, with the aim of providing standardised and uniform access to the hydro-meteorological data of Italy, allowing, among other things, the calculation of statistics, trends and indicators related to the hydrological cycle, weather, climate and water resources at national and sub-national (e.g., river basin districts, catchments, climatic areas) scales. A prototype of the system has been developed in the framework of the Italian National Board for Hydrological Operational Services, coordinated by ISPRA, which federates the Italian SIRs responsible for hydro-meteorological monitoring at the local level.

A hydrometeorological web portal will be the entry point for end users such as institutional bodies, research institutions and universities to discover, access and download the hydrological and meteorological data of Italy made available by the SIR services.

Each SIR is already publishing online hydrological and meteorological regional data by means of Internet services. As a requirement, no obligation can be imposed on the specific communication protocols (and related data models) that will be implemented by such services, as the entry barrier for SIR to participate in the system of systems should be minimal.

CNR-IIA is responsible for the design of the architecture, which will be based on a brokering approach to enable interoperability amongst the heterogeneous systems. CNR-IIA is also responsible for the implementation of both the brokering component, based on the Discovery and Access Broker (DAB) technology, and the hydrometeorological web portal.

The DAB is a software framework able to implement discovery and access functionalities across a distributed set of heterogeneous data publication systems, in a transparent way for the end user, by acting as a mediator of communication protocols and related data models.
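The mediation role described above can be illustrated with a minimal sketch of the brokering pattern; this is not DAB code, and all class names, native field names and sample records below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Record:
    """Harmonised internal data model used by the broker."""
    station: str
    variable: str
    url: str

class CsvServiceAdapter:
    """Translates one regional service's native rows into the common model."""
    def __init__(self, rows):
        self.rows = rows
    def harvest(self):
        return [Record(r["st"], r["var"], r["link"]) for r in self.rows]

class JsonServiceAdapter:
    """A second service with a different native schema."""
    def __init__(self, entries):
        self.entries = entries
    def harvest(self):
        return [Record(e["site_name"], e["parameter"], e["download"])
                for e in self.entries]

class Broker:
    """Single discovery interface over heterogeneous services."""
    def __init__(self, adapters):
        self.adapters = adapters
    def discover(self, variable):
        return [rec for a in self.adapters for rec in a.harvest()
                if rec.variable == variable]

broker = Broker([
    CsvServiceAdapter([{"st": "Po at Pontelagoscuro", "var": "discharge",
                        "link": "https://example.org/po.csv"}]),
    JsonServiceAdapter([{"site_name": "Adige at Trento", "parameter": "discharge",
                         "download": "https://example.org/adige.json"}]),
])
print([r.station for r in broker.discover("discharge")])
```

Each new SIR service only requires a new adapter; the end user keeps a single discovery interface, which is the essence of the brokering approach.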

Other service interfaces published by the brokering component will be used by different end-user tools and applications, also enabling the sharing of Italy's hydrological and meteorological data with different national and international initiatives, in particular the WMO Hydrological Observing System (WHOS).

The brokering component will be deployed and managed operationally on a cloud infrastructure to optimize overall system performance and resource usage.

The system will be initially hosted on the CNR-IIA cloud infrastructure backed by Amazon Web Services (AWS), while the target hosting infrastructure and the related cloud computing services, to be ready for operational use by the end of 2025, will be provided by INFN.

How to cite: Boldrini, E., Roncella, R., Papeschi, F., Mazzetti, P., Casaioli, M., Mariani, S., Bussettini, M., Lastoria, B., and Pecora, S.: Brokering approach based implementation for the national hydrological and meteorological information system in Italy, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12167, https://doi.org/10.5194/egusphere-egu22-12167, 2022.

EGU22-12330 | Presentations | ESSI2.9

Towards interoperable metadata description for imagery data of our Earth and other planets: Connection between ocean floor and planetary surfaces 

Andrea Naß, Timm Schoening, Karl Heger, Autun Purser, Mario D'Amore, Ernst Hauber, Tom Kwasnitschka, Robert Munteanu, and Thomas Roatsch

Imaging the environment is an essential method in the spatial sciences when studying the Earth or any other planet. It is thus a crucial component in the exploration of both the ocean floor and planetary surfaces. In both domains it is applied at various scales, from microscopy through ambient imaging to remote sensing, and provides rich information for science.

Due to the recent increase in data acquisition technologies, advances in imaging capabilities, and the number of platforms that provide imagery and related research data, data volumes in the natural sciences, and thus in ocean and planetary research, continue to grow at an exponential rate. Although many datasets have already been collected and analyzed, the systematic, comparable, and transferable description of research data through metadata is still a major challenge for both fields. Yet these descriptive elements are crucial to enable efficient (re)use of valuable research data, to prepare the scientific domains for data-analytical tasks such as machine learning and big data analytics, and to improve interdisciplinary science by research groups not directly involved in the data collection.

In order to manage, interpret, reuse and publish imaging data more effectively and efficiently, we here present a project to develop interoperable metadata recommendations in the form of FAIR [1] digital objects (FDOs) [2] for 5D (i.e. x, y, z, time, spatial reference) imagery of the Earth and other planets. An FDO is a human- and machine-readable file format for an entire image set; it does not contain the actual image data, only references to it through persistent identifiers (FAIR marine images [3]). The FDOs for the spatial sciences are thus characterized at their core by the 5D navigation data mentioned above, which distinguishes them from imagery of other domains (e.g., medical). In addition to these core metadata, further descriptive elements are required to describe and quantify the semantic content of imaging research data. Such semantic FDOs are similarly domain-specific, but again synergies are expected between Earth and planetary research. Subsequently, by developing ontology concepts for these two imaging domains, scientific analogies and causal connections between the two research domains can be illuminated.
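As a minimal sketch of the 5D core described above, the following checks that every image reference in a set carries the navigation core while the FDO itself holds no pixel data; the key names are illustrative assumptions, not the published iFDO vocabulary:

```python
# The 5D navigation core (x, y, z, time, spatial reference); key names are
# illustrative only, not the actual iFDO field names.
CORE_5D = {"latitude", "longitude", "altitude", "datetime", "crs"}

def is_core_complete(image_entry):
    """True if an image reference carries all 5D core metadata fields."""
    return CORE_5D <= image_entry.keys()

image_set = {
    "set-name": "dive-042",        # hypothetical image-set identifier
    "images": {                    # references only -- no pixel data stored
        "img_0001.jpg": {
            "latitude": 54.3, "longitude": 10.1, "altitude": -812.0,
            "datetime": "2021-06-01T10:00:00Z", "crs": "EPSG:4326",
        },
    },
}
print(all(is_core_complete(v) for v in image_set["images"].values()))
```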

The main benefits expected from this project are to (1) improve the quality and reusability of future research data, (2) support a sustainable research data environment by closing the life cycle of the research data, (3) increase the inter- and transdisciplinary comparability of datasets, and (4) enable further scientific communities to transfer their own vocabularies and to use the ontology concepts within other natural science applications.

We here present the current status of the project, with the specific tasks on joint metadata description of planetary and oceanic data outlined. In particular we show how we intend to implement metadata for valuable research data in both domains in the future, and demonstrate where these developments should be adopted.

[1] Wilkinson, M., Dumontier, M., Aalbersberg, I. et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data 3, 160018 (2016), doi:10.1038/sdata.2016.18.

[2] https://fairdigitalobjectframework.org/

[3] https://marine-imaging.com/fair/ifdos/iFDO-overview/

How to cite: Naß, A., Schoening, T., Heger, K., Purser, A., D'Amore, M., Hauber, E., Kwasnitschka, T., Munteanu, R., and Roatsch, T.: Towards interoperable metadata description for imagery data of our Earth and other planets: Connection between ocean floor and planetary surfaces, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12330, https://doi.org/10.5194/egusphere-egu22-12330, 2022.

EGU22-13235 | Presentations | ESSI2.9

Automated platform for detecting and mapping crop diseases using UAV and Artificial Intelligence 

Alexandru-Lucian Vilcea, Marian Dardala, and Ionut-Cosmin Sandric

With the growth of the world's population, the constant increase in food demand generates the need to find new ways of optimizing agricultural workflows. Today, a wide variety of software dedicated to precision agriculture helps farmers gather otherwise hardly available data and extract information to minimize crop losses or prevent diseases. However, these solutions hardly allow a complete workflow from gathering the images and processing the datasets to mapping and detecting crop diseases, in which treatments can be applied precisely where needed without manually exchanging information between applications. In this paper, we propose an architecture, as well as a possible implementation, for a web platform that can manage such workflows. The platform was implemented using ASP.NET Core 3.1 with C# as the main programming language. Following best practices in terms of maintainability, the integration with third-party software was developed using proxy components that implement each component's SDK or API, making these external solutions easily interchangeable. The first module presented in the paper covers the integration of third-party UAV-controlling platforms. We integrated the UgCS commercial solution using the provided .NET SDK for our scenario. The data gathered by the UAVs controlled by this module, consisting of RGB, thermal and multispectral images, were stored using the Azure Blob Storage cloud service. The location of each image was acquired by extracting the XMP metadata and stored in a PostgreSQL database with the PostGIS extension. Next, we provided a way to automatically generate orthophoto imagery from the acquired data by integrating the Python API available in Agisoft Metashape.
Lastly, using the products obtained from the previous step, we calculated different vegetation indices for the analyzed fields using C# and analyzed the outcomes with deep-learning models to identify and map vegetation health states. The platform has been implemented and tested on several case studies located across Romania, reaching satisfactory results.
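As an illustration of the vegetation-index step (implemented in C# in the platform; Python is used here for brevity), the sketch below flags stressed samples via NDVI; the 0.4 threshold and the reflectance values are made-up examples, not outputs of the platform:

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance."""
    return (nir - red) / (nir + red)

def stress_flags(samples, threshold=0.4):
    """Flag field samples whose NDVI falls below a crop-health threshold.

    The 0.4 threshold is an illustrative assumption; real thresholds are
    crop- and phenology-specific."""
    return [ndvi(red, nir) < threshold for red, nir in samples]

# Synthetic (red, NIR) reflectance: healthy canopy vs. stressed canopy.
samples = [(0.08, 0.50), (0.20, 0.28)]
print(stress_flags(samples))  # [False, True]
```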

How to cite: Vilcea, A.-L., Dardala, M., and Sandric, I.-C.: Automated platform for detecting and mapping crop diseases using UAV and Artificial Intelligence, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13235, https://doi.org/10.5194/egusphere-egu22-13235, 2022.

ESSI3 – Open Science Informatics for Earth and Space Sciences

EGU22-4996 | Presentations | ESSI3.1

Why we need to highlight FAIR and Open Data and how to do it with EASYDAB 

Anette Ganske, Angelika Heil, Hannes Thiemann, and Andrea Lammert

The FAIR1 data principles are important for the findability, accessibility, interoperability, and reusability of data. Therefore, many repositories make huge efforts to curate data so that they become FAIR, and assign DataCite2 DOIs to archived data to increase findability. Nevertheless, recent investigations (Strecker3, 2021) show that many datasets published with a DataCite DOI do not meet all aspects of the FAIR principles, as their metadata lack important information for reuse and interoperability. Further examination of data from the Earth System Sciences (ESS) reveals that automatic processability in particular is suboptimal, e.g. because of missing persistent identifiers for creators and affiliations, or missing information about the geolocations or time ranges of simulations. In addition, many datasets either have no licence information or a non-open licence.

The question arises of how datasets with open licences4 and high-quality metadata can be highlighted so that they stand out from the crowd of published data. One solution is the newly developed branding for FAIR and open ESS data, called EASYDAB5 (Earth System Data Branding). It consists of a logo that earmarks the landing pages of datasets that have a DataCite DOI, an open licence, open file formats6 and rich metadata, and that were quality controlled by the responsible repository. The EASYDAB logo is protected and may only be used by repositories that agree to follow the EASYDAB Guidelines7. These guidelines define principles for achieving high metadata quality for ESS datasets by demanding specific metadata information. Domain-specific quality guidelines define the mandatory metadata for a self-explanatory description of the data. One example is a quality guideline for atmospheric model data, the ATMODAT Standard8, which prescribes the metadata not only for the files but also for the DOI and the landing page. The atmodat data checker9 additionally helps data providers and repositories check whether data files meet the requirements of the ATMODAT Standard.
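The kind of mandatory-attribute check that such a checker performs can be sketched as follows; the required attribute set here is an illustrative assumption based on common CF/ACDD practice, not the actual ATMODAT mandatory list:

```python
# Illustrative required global attributes, loosely following common
# CF/ACDD practice; NOT the exact mandatory list of the ATMODAT Standard.
REQUIRED_GLOBAL_ATTRS = {"title", "institution", "source",
                         "Conventions", "license", "creator"}

def missing_attributes(global_attrs):
    """Return the required global attributes absent from a file header."""
    return sorted(REQUIRED_GLOBAL_ATTRS - set(global_attrs))

header = {"title": "Test simulation", "institution": "Example Centre",
          "source": "model v1.0", "Conventions": "CF-1.8"}
print(missing_attributes(header))  # ['creator', 'license']
```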

The use of the EASYDAB logo is free of charge, but repositories must sign a contract with TIB – Leibniz Information Centre for Science and Technology10. TIB will control in the future that datasets with landing pages highlighted with EASYDAB indeed follow the EASYDAB Guidelines to ensure that EASYDAB remains a branding for high-quality data. 

Using EASYDAB, repositories can indicate their efforts to publish data with high-quality metadata. The EASYDAB logo also indicates to data users that the dataset is quality controlled and can be easily reused.

 

1: https://www.go-fair.org/fair-principles/

2: https://datacite.org/

3: https://edoc.hu-berlin.de/bitstream/handle/18452/23590/BHR470_Strecker.pdf?sequence=1

4: https://opendefinition.org/licenses/

5: https://www.easydab.de

6: http://opendatahandbook.org/guide/en/appendices/file-formats/

7: https://cera-www.dkrz.de/WDCC/ui/cerasearch/entry?acronym=EASYDAB_Guideline_v1.1

8: https://doi.org/10.35095/WDCC/atmodat_standard_en_v3_0

9: https://github.com/AtMoDat/atmodat_data_checker

10: https://www.tib.eu/en/

How to cite: Ganske, A., Heil, A., Thiemann, H., and Lammert, A.: Why we need to highlight FAIR and Open Data and how to do it with EASYDAB, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4996, https://doi.org/10.5194/egusphere-egu22-4996, 2022.

EGU22-5985 | Presentations | ESSI3.1

The new emerging DOI biotope within and around the Open Source Geospatial Foundation (OSGeo) 

Peter Löwe, Maris Nartišs, Jeff McKenna, and Astrid Emde

We report on the adoption of persistent identifiers by community-driven geospatial open source communities and their umbrella organisation, OSGeo. After a ramp-up process, which included the introduction and evaluation of Digital Object Identifiers (DOIs) for OSGeo conference recordings, a growing number of OSGeo project communities have started to adopt DOIs for their respective code bases (e.g. MOSS GIS https://doi.org/10.5281/zenodo.5825144, GRASS GIS https://doi.org/10.5281/zenodo.5810537), enabling scientific reference and citation both for distinct versions of code and for the overall project code base. In addition, the use of persistent IDs for persons is accelerating. This presentation provides an overview of the latest state of DOI use by projects, lessons learned, emerging challenges (e.g. DOI-based referencing of software versions predating DOI minting) and emerging opportunities for the OSGeo communities (e.g. integration of DOIs for code, presentations, manuals and videos) and beyond.

How to cite: Löwe, P., Nartišs, M., McKenna, J., and Emde, A.: The new emerging DOI biotope within and around the Open Source Geospatial Foundation (OSGeo), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5985, https://doi.org/10.5194/egusphere-egu22-5985, 2022.

EGU22-6573 | Presentations | ESSI3.1

DAS Data Management Challenges and Needs 

Rob Mellors and Chad Trabant and the DAS RCN Data Management Working Group

Distributed Acoustic Sensing (DAS) is a relatively recent technology capable of collecting seismic (and other) time-series data using optical fibers as sensors. These optical fibers may be custom deployments or re-purposed telecommunication fibers. The range of applications is increasing rapidly; recent studies include subsurface monitoring, earthquake hazard, geotechnical engineering, and ice flow. As the number of uses and studies increases, the need to archive the datasets is expected to increase as well. Archiving of DAS data currently faces multiple challenges. These include the need for large amounts (hundreds of terabytes) of storage, the associated data transport and processing, and a standardized metadata format. As part of the DAS Research Coordination Network (RCN), a DAS data management working group is constructing a metadata model for DAS data that will address these needs. The objective is to develop a common metadata standard for archival purposes and to guide data collection at experiments. The metadata requirements include: 1) accommodating most use cases (data collection scenarios); 2) permitting cloud-based processing; 3) allowing pre-processing; and 4) reducing the burden of data transport. Standard metadata principles, such as findability, accessibility, interoperability and reusability (FAIR) and machine-readability, will be adhered to. The purpose of this presentation is to inform potential users of these efforts, encourage adoption of the proposed standard, and invite community input.

How to cite: Mellors, R. and Trabant, C. and the DAS RCN Data Management Working Group: DAS Data Management Challenges and Needs, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6573, https://doi.org/10.5194/egusphere-egu22-6573, 2022.

EGU22-7291 | Presentations | ESSI3.1

British Antarctic Survey’s Aerogeophysics Data: Releasing 25 Years of Gravity, Magnetics, and Radar Datasets over Antarctica 

Alice Fremand, Julien Bodart, Tom Jordan, Fausto Ferraccioli, Carl Robinson, Hugh Corr, Helen Peat, Robert Bingham, and David Vaughan

Over the past 50 years, the British Antarctic Survey (BAS) has been one of the major acquisitors of airborne geophysical data over Antarctica, providing scientists with gravity, magnetics and radar datasets that have been central to many studies of the past, present, and future evolution of the Antarctic Ice Sheet. Until recently, many of these datasets were unpublished in full, restricting the further usage of the data for different glaciological and geophysical applications. Starting in 2020, scientists and data managers at the British Antarctic Survey have worked on standardising and releasing large swaths of aerogeophysical data acquired during the period 1994-2020, including a total of 64 datasets from 24 different surveys, amounting to ~450,000 line-km (or 5.3 million km2) of data across West Antarctica, East Antarctica, and the Antarctic Peninsula. Amongst these are the extensive surveys over the fast-changing Pine Island (2004-05) and Thwaites (2018-20) glacier catchments. Considerable effort has been made to standardise these datasets to comply with the FAIR (Findable, Accessible, Interoperable and Re-usable) data principles, as well as to create a new Polar Airborne Geophysics Data Portal (https://www.bas.ac.uk/project/nagdp/), which serves as a user-friendly interface to interact with and download the newly published data. Here, we review how these datasets were acquired and processed, and present the methods used to standardise them. We then discuss the new data portal infrastructure and the interactive tutorials that were created to improve the accessibility of the data. We believe that this newly released data will be a valuable asset to future geophysical and glaciological studies over Antarctica and will extend significantly the life cycle of the data. All datasets included in this data release are now fully accessible at the UK Polar Data Centre, which is certified by the CoreTrustSeal: https://data.bas.ac.uk

How to cite: Fremand, A., Bodart, J., Jordan, T., Ferraccioli, F., Robinson, C., Corr, H., Peat, H., Bingham, R., and Vaughan, D.: British Antarctic Survey’s Aerogeophysics Data: Releasing 25 Years of Gravity, Magnetics, and Radar Datasets over Antarctica, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7291, https://doi.org/10.5194/egusphere-egu22-7291, 2022.

EGU22-8427 | Presentations | ESSI3.1

How to publish your data with the EPOS Multi-scale Laboratories data publication chain 

Geertje ter Maat and Richard Wessels and the EPOS TCS Multi-scale Laboratories Team

The Multi-scale Laboratories (MSL) are a network of European laboratories bringing together the scientific fields of analogue modeling, paleomagnetism, rock and melt physics, geochemistry and microscopy. MSL is one of nine Thematic Core Services (TCS) of the European Plate Observing System (EPOS) (https://www.epos-eu.org/). The overarching goal of EPOS is to establish a comprehensive multidisciplinary research platform for the Earth sciences in Europe. It aims at facilitating the integrated use of data, models, and facilities, from both existing and new distributed pan-European Research Infrastructures, allowing open access and transparent (re-)use of data.

Laboratory facilities are an integral part of Earth science research. The diversity of methods employed in such infrastructures reflects the multi-scale nature of the Earth system and is essential for understanding its evolution, assessing geo-hazards, and sustainably exploiting geo-resources.

Experimental data from these laboratories provide the backbone for many scientific publications, but are often available only on request from the author, as supplementary information to research articles, or in a non-digital form (printed tables, figures), limiting data re-use, re-interpretation and availability. Moreover, the raw data often remain unpublished, inaccessible, and unpreserved for the long term.

The TCS MSL is committed to making Earth science laboratory data Findable, Accessible, Interoperable, and Reusable (FAIR). For this purpose, the TCS MSL encourages the community to share their data via DOI-referenced, citable data publications. To facilitate this and ensure the provision of rich metadata, we offer user-friendly tools, plus the necessary data management expertise, to support all aspects of data publishing for the benefit of individual lab researchers via partner repositories. Data published via TCS MSL are described with the use of sustainable metadata standards enriched with controlled vocabularies used in geosciences. The resulting data publications are also exposed through a designated TCS MSL online portal that brings together DOI-referenced data publications from partner research data repositories (https://epos-msl.uu.nl/). As such, successful efforts have already been made to interconnect new data (metadata exchange) with existing databases such as MagIC (paleomagnetic data in Earthref.org) and, in the future, we expect to broaden and improve this practice with other repositories. 

How to cite: ter Maat, G. and Wessels, R. and the EPOS TCS Multi-scale Laboratories Team: How to publish your data with the EPOS Multi-scale Laboratories data publication chain, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8427, https://doi.org/10.5194/egusphere-egu22-8427, 2022.

EGU22-9964 | Presentations | ESSI3.1

Citing large numbers of diverse datasets 

James Ayliffe, Martina Stockhause, Shelley Stall, Deb Agarwal, Justin Buck, Caroline Coward, and Chris Erdmann

In the Earth and biological sciences, data are often preserved and made publicly available in data repositories, where they are citable by DOI and published under a Creative Commons CC-BY license. Researchers combine many datasets across disciplines, repositories, and regions to better understand processes, patterns, and drivers. Citing these many datasets is difficult, as the large number does not fit into the reference section of a paper, yet the datasets' licenses require that credit be given to their creators.

 

The Data Citation Community of Practice (CoP) was formed to address such challenges in data citation and other scholarly work, supporting indexing and the measurement of impact. As a solution for large numbers of data citations, the CoP identified a container that holds the citations; this container and its internal format are referred to as a 'reliquary'. Existing dataset-collection methods have been gathered and evaluated using concrete citation use cases. Requirements for the reliquary content have been identified and applied to the use cases. In this presentation, we report on current progress towards an approach to building a reliquary.
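Since the reliquary format is still under discussion, the following is only a hypothetical sketch of such a container: a single citable object bundling many dataset DOIs, with duplicates collapsed so each dataset is credited once (all field names and values are made up):

```python
import json

def make_reliquary(title, creators, dataset_dois):
    """Bundle many dataset citations into a single citable container.

    The structure is purely hypothetical: the working group has not
    finalised a reliquary format."""
    return {
        "type": "reliquary",
        "title": title,
        "creators": creators,
        "references": [{"doi": d, "relation": "Cites"}
                       for d in sorted(set(dataset_dois))],
    }

rel = make_reliquary(
    "Datasets combined for a cross-repository drought analysis",  # made-up title
    ["Doe, J."],
    ["10.5072/a", "10.5072/b", "10.5072/a"],   # duplicate collapses to one entry
)
print(json.dumps(rel, indent=2))
```

A paper would then cite the reliquary's single identifier, while indexing systems resolve it to credit every contained dataset.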

 

Reliquaries are an important part of enabling cross-disciplinary analysis of large amounts of data stored in many repositories. The challenge with a reliquary will be to design a method that works across diverse repositories and domain citation practices, and to enhance the indexing system to direct credit to the reliquary's content and authors. The CoP is in the process of setting up a Research Data Alliance (RDA) Working Group on Complex Citations in the Earth, Space, and Environmental Sciences to broaden the discussion and to find further use cases for evaluation, as well as interested early adopters.

How to cite: Ayliffe, J., Stockhause, M., Stall, S., Agarwal, D., Buck, J., Coward, C., and Erdmann, C.: Citing large numbers of diverse datasets, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9964, https://doi.org/10.5194/egusphere-egu22-9964, 2022.

EGU22-10321 | Presentations | ESSI3.1

Findability of laboratory data in the solid Earth sciences: a portal for cross-disciplinary metadata 

Laurens Samshuijzen, Otto Lange, Ronald Pijnenburg, Kirsten Elger, Richard Wessels, Geertje ter Maat, Simone Frenzel, and Martyn Drury

The Thematic Core Service Multi-scale Laboratories (TCS MSL) is a community within the European Plate Observing System (EPOS) that includes a wide range of world-class laboratory infrastructures and provides a cross-disciplinary yet coherent platform for virtual access to data and physical access to solid Earth science labs. The data produced at the participating laboratories are crucial to serving society's need for geo-resource exploration and for protection against geo-hazards. To model resource formation and system behaviour during exploitation, researchers need an understanding from the molecular to the continental scale, based on experimental and analytical data.

Data coming from the MSL laboratories provide the backbone for scientific publications, but they are often available only as supplementary information to research articles. Moreover, the vast majority of the collected data remain unpublished, inaccessible, and often not sustainably preserved for the long term. To allow reuse of these valuable but often neglected data, the TCS MSL developed a full publication chain to support their FAIR dissemination and sustainable accessibility. This chain consists of a community-driven metadata standard that accommodates multiple detailed discipline-specific descriptions, a publication tool (metadata editor), and an online community portal that gives access to DOI-referenced data publications at multiple research data repositories related to the TCS MSL context (https://epos-msl.uu.nl/). The portal is built on the CKAN repository toolkit and is driven by the richness of the TCS MSL metadata standard. Besides its importance for the TCS MSL community, it also serves as a showcase of how to set up a CKAN environment as a cross-disciplinary catalogue for FAIR metadata exchange through a cascade of infrastructures.
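Because the portal is built on CKAN, its catalogue can be queried through CKAN's standard Action API. A minimal sketch of composing such a search request (the search terms are illustrative assumptions, not the portal's actual vocabulary):

```python
# Compose a package_search request against a CKAN-based catalogue such as
# the TCS MSL portal. The query string "rock deformation" is an invented
# example, not a curated TCS MSL keyword.
from urllib.parse import urlencode

def build_search_url(base_url: str, query: str, rows: int = 10) -> str:
    """Compose a CKAN Action API package_search request URL."""
    params = urlencode({"q": query, "rows": rows})
    return f"{base_url}/api/3/action/package_search?{params}"

url = build_search_url("https://epos-msl.uu.nl", "rock deformation")
print(url)
# https://epos-msl.uu.nl/api/3/action/package_search?q=rock+deformation&rows=10
```

The JSON response of `package_search` lists matching dataset records with their metadata, which is what enables harvesting the portal's content into other catalogues.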

How to cite: Samshuijzen, L., Lange, O., Pijnenburg, R., Elger, K., Wessels, R., ter Maat, G., Frenzel, S., and Drury, M.: Findability of laboratory data in the solid Earth sciences: a portal for cross-disciplinary metadata, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10321, https://doi.org/10.5194/egusphere-egu22-10321, 2022.

EGU22-10546 | Presentations | ESSI3.1

Accelerating Open and FAIR Data Practices Across the Earth, Space, and Environmental Sciences 

Christopher Erdmann and Shelley Stall

Data underlying published studies are often difficult to find or access, which can hinder new scientific research. Currently, only about 20% of published papers have their supporting data in discoverable and accessible repositories. The AGU, working with our partners (Dryad, CHORUS, ESIP, Wiley) and supported by the National Science Foundation (NSF), will focus on improving guidance and workflows to properly manage, link, and track data and software references throughout the publication pipeline. The resulting best practices will serve as a resource for AGU editors, reviewers, and authors and help advance data and software publication policies. Beyond AGU, this work will serve as a model for linking information across funders, data repositories, and publishers, and for improving public access to research outputs. In this talk, current publication practices as they relate to the FAIR principles will be described, together with lessons learned and how workflows and guidance are being improved.

How to cite: Erdmann, C. and Stall, S.: Accelerating Open and FAIR Data Practices Across the Earth, Space, and Environmental Sciences, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10546, https://doi.org/10.5194/egusphere-egu22-10546, 2022.

EGU22-10800 | Presentations | ESSI3.1

The Culture of Open Science: Building an Ethos that Feels like Home across the Earth, Space and Environmental Sciences -- One Step at a Time 

Shelley Stall and Christopher Erdmann

As a researcher, having access to well-documented research datasets and software relevant to your work can vary in difficulty based on your discipline and other factors. When it works well, you benefit from the ability to easily analyze and perhaps use those data and software. When it works poorly, you are sending emails to get access to datasets, asking for more information, hoping you will get responses, and then maybe trusting that you understand the data or software well enough to integrate, rework, or explore further.

How do we create more of the “beneficial” experience? How do we create a culture where better tools, practices, and methods help us achieve this goal? It takes deliberate intent and patience to take those first steps. Many of the difficult scientific questions still in front of us require easier access to more usable data and software, enabling us to “see” the complex systems of the universe better.

In this talk, we will share the work happening at AGU, among its collaborators, and across the broader community to take those initial steps and support the culture of the future.

How to cite: Stall, S. and Erdmann, C.: The Culture of Open Science: Building an Ethos that Feels like Home across the Earth, Space and Environmental Sciences -- One Step at a Time, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10800, https://doi.org/10.5194/egusphere-egu22-10800, 2022.

EGU22-11046 | Presentations | ESSI3.1

How to Prepare Atmospheric Model Data for Publication with the ATMODAT Standard 

Angelika Heil, Andrea Lammert, and Anette Ganske

Atmospheric models are a central element of climate research. Access to atmospheric model data is of interest not only to a wide scientific community but also to public services, companies, politicians and citizens. The state-of-the-art approach to making the data available is to publish them via a data repository that adheres to the FAIR (Findable, Accessible, Interoperable, and Reusable) principles(1). A FAIR publication of research data implies that the data are described with rich metadata and that the data and metadata meet relevant discipline-specific standards. 

A very comprehensive data standard has been developed for climate model output within the Coupled Model Intercomparison Project (CMIP)(2), which largely builds upon the Climate and Forecast Metadata Conventions (CF Conventions)(3). Nevertheless, there are many areas of atmospheric modelling where data standardisation according to the CMIP standard is not possible or very difficult. To facilitate this task, the ATMODAT standard(4), a quality guideline for the FAIR and open publication of atmospheric model data, was recently established. 

The ATMODAT standard defines a set of requirements that aim at ensuring a high degree of reusability of published atmospheric model data. The requirements include the use of the netCDF file format(5), the application of the CF conventions(3), a data publication with a DataCite DOI(6), and rich and standardised metadata for the data files, the DOI and on the landing page. 

The atmodat data checker(7) was developed to support data producers in checking if their file metadata comply with the ATMODAT standard. 
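The core of such a compliance check can be sketched in a few lines. A minimal illustration in this spirit, where the attribute list is an invented example and not the official ATMODAT requirement set:

```python
# Check a file's global attributes against a list of mandatory ones,
# in the spirit of the atmodat data checker. REQUIRED_GLOBAL_ATTRS is
# illustrative only, not the ATMODAT standard's actual list.

REQUIRED_GLOBAL_ATTRS = {"title", "institution", "source", "Conventions", "license"}

def check_global_attrs(attrs: dict) -> list:
    """Return the mandatory global attributes missing from a file, sorted."""
    return sorted(REQUIRED_GLOBAL_ATTRS - attrs.keys())

attrs = {"title": "Building-resolving run, example domain",
         "institution": "Example Institute",
         "Conventions": "CF-1.8"}
missing = check_global_attrs(attrs)
print(missing)  # ['license', 'source']
```

In practice the checker operates on netCDF file headers rather than plain dictionaries, but the principle of comparing present metadata against a required set is the same.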

We demonstrate the application of the ATMODAT standard to selected datasets from a building-resolving atmospheric model, the "grassroots" AEROCOM MIP, and weather pattern data derived from an atmospheric reanalysis. We explain the practical workflow involved to achieve an ATMODAT-compliant data publication and discuss the various challenges.

 

(1) https://doi.org/10.1038/sdata.2016.18
(2) https://doi.org/10.5194/gmd-13-201-2020
(3) https://cfconventions.org/
(4) https://doi.org/10.35095/WDCC/atmodat_standard_en_v3_0
(5) https://www.unidata.ucar.edu/software/netcdf/
(6) https://datacite.org/
(7) https://github.com/AtMoDat/atmodat_data_checker

How to cite: Heil, A., Lammert, A., and Ganske, A.: How to Prepare Atmospheric Model Data for Publication with the ATMODAT Standard, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11046, https://doi.org/10.5194/egusphere-egu22-11046, 2022.

EGU22-11102 | Presentations | ESSI3.1

Joining Geo Data Across Different Providers to Ease Machine Learning Applications 

Matthes Rieke, Benedikt Gräler, and Simon Jirka

Data integration and harmonization has always been a tedious task. The increase of available data in volume and variety has further increased the need for thorough data integration. Furthermore, the growing use of automatic algorithms stresses the need for a sensible geo data platform to avoid the ‘garbage in, garbage out’ trap and to allow for meaningful data analysis. We reviewed different projects and learned about the various needs and constraints of joint spatial research data infrastructures, from local to cloud-based deployments. Typically, these systems are not designed from scratch; existing systems need to be integrated or interfaced. As a result, and to support the sovereignty of distributed data centers, modern infrastructures need to be capable of supporting federated set-ups. Often these research data infrastructures shall not only store raw data for scientists but also provide results (maps, derived data products, tools and applications) to the public. This goes along with the need for access delegation (e.g. OAuth). A special focus is put on the provision of the joint datasets for machine learning applications: in order to facilitate efficient learning and prediction, an ML processing environment needs to be aligned with the data infrastructure. 

We will present commonalities among these infrastructures and outline typical design patterns. A spatial data infrastructure based on open source software components that can be deployed on the cloud will be introduced. It features open standardized interfaces and services for easy adaptation and connectivity.

How to cite: Rieke, M., Gräler, B., and Jirka, S.: Joining Geo Data Across Different Providers to Ease Machine Learning Applications, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11102, https://doi.org/10.5194/egusphere-egu22-11102, 2022.

EGU22-12105 | Presentations | ESSI3.1

User Identification and Authentication for Geophysical Data Centers:  Exploring a Difficult Transition 

Florian Haslinger, Jerry Carter, Helle Pedersen, Jonathan Schaeffer, Robert Casey, Javier Quinteros, and Angelo Strollo

Many geophysical data centers are being asked by their sponsors and funding agencies to provide information on what data and services are used by whom and for what purpose, in greater detail than was customary in the past, when bulk information about the number of users/accesses and download volumes was deemed sufficient in most cases. Up to now, data centers have generally offered anonymous access to large parts of their holdings, with different approaches to basic monitoring and access logging, e.g. by IP address as a rough proxy that allows the geographical user distribution to be inferred in some detail. 

Already today, access to embargoed or otherwise restricted data, or to advanced functions like personal work spaces and computational resources, is usually protected by user authentication and authorisation. Standardization of the identity management protocols is a requirement for further supporting the federation of data centers and their services, also in light of future integration with cloud services or other integrated services. For example in seismology, federated data retrieval systems follow a specific credential process based on standards for data exchange and web services established and maintained by the International Federation of Digital Seismograph Networks (FDSN). 

These new information requirements from funding agencies would, however, require implementing identity management systems and some form of user identification / authentication for many or all data center services and resources. This raises concerns within the data centers on a number of aspects:

  • Evidence from other domains demonstrates that requiring authentication reduces the use of data center services.
  • Enforcing authentication is often perceived as not in line with best practices for open science.
  • Implementing identity management for usage profiling may lead to significantly increased effort at the data centers, especially with regard to compliance with data protection legislation like the GDPR, and it may significantly impede automated (scripted) machine-to-machine access.
  • The level of detail that should be reported back to funding agencies is unclear, and there are doubts whether detailed user profiling is a reasonable ‘performance indicator’.

Indeed, such knowledge gathering on users needs to be obtained through technical implementations that take into account the impact on user experience, the impact on decades of research tool development, and the resources necessary to implement and operate such systems, whether embedded into the operational services or taking other forms such as surveys and outreach to user groups.

Relevant discussions have now started among representatives of major geophysical data centers so that interim plans can be shared, ideas and experiences exchanged, and standard approaches can be developed and recommended for consideration by the community. In these discussions we consider both scenarios where identity management is useful and relevant or where we may consolidate our views and arguments with respect to the general user data reporting requests.

How to cite: Haslinger, F., Carter, J., Pedersen, H., Schaeffer, J., Casey, R., Quinteros, J., and Strollo, A.: User Identification and Authentication for Geophysical Data Centers:  Exploring a Difficult Transition, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12105, https://doi.org/10.5194/egusphere-egu22-12105, 2022.

EGU22-12187 | Presentations | ESSI3.1

FAIR WISH - FAIR Workflows to establish IGSN for Samples in the Helmholtz Association 

Linda Baldewein, Kirsten Elger, Birgit Heim, Alexander Brauser, Simone Frenzel, Ulrike Kleeberg, and Ben Norden

The International Geo Sample Number (IGSN) is a globally unique and persistent identifier (PID) for physical samples and collections, with a discovery function on the Internet. IGSNs make it possible to directly link data and publications with the samples they originate from, thus closing the last gap in the full provenance of research results. The modular IGSN metadata schema has a small number of mandatory and recommended metadata elements that can be individually extended with discipline-specific elements.

Based on three use cases representing all states of digitisation, from individual scientists collecting sample descriptions in their field books to digital sample management systems fed by an app used in the field, FAIR WISH will (1) develop standardised and discipline-specific IGSN metadata schemes for different sample types from the Earth and environmental sciences, (2) develop workflows to generate machine-readable IGSN metadata from different states of digitisation, (3) develop workflows to automatically register IGSNs, and (4) prepare the resulting workflows for further use in the Earth science community.

How to cite: Baldewein, L., Elger, K., Heim, B., Brauser, A., Frenzel, S., Kleeberg, U., and Norden, B.: FAIR WISH - FAIR Workflows to establish IGSN for Samples in the Helmholtz Association, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12187, https://doi.org/10.5194/egusphere-egu22-12187, 2022.

EGU22-12563 | Presentations | ESSI3.1

Tools and incentives for the curation of geosciences data. Experiences from GFZ Data Services 

Florian Ott, Kirsten Elger, and Simone Frenzel

GFZ Data Services, hosted at the GFZ German Research Centre for Geosciences (GFZ), is a domain repository for geoscience data that has assigned digital object identifiers (DOIs) to data and scientific software since 2004 and, since 2012, has been an Allocating Agent for IGSN, the globally unique persistent identifier for physical samples, providing IGSN minting services. The repository provides DOI minting services for several global monitoring networks/observatories in geodesy and geophysics (e.g. INTERMAGNET; the IAG Services ICGEM, IGETS, IGS; GEOFON) and for collaborative projects (TERENO, EnMAP, GRACE, CHAMP), as well as curation of long-tail data by domain specialists.

GFZ Data Services increases the interoperability of long-tail data by (1) providing comprehensive domain-specific data descriptions via standardised and machine-readable metadata with controlled domain vocabularies; (2) complementing the metadata with comprehensive and standardised technical data descriptions or reports; and (3) embedding the research data in a wider context by providing cross-references, through persistent identifiers (DOI, IGSN, ORCID, Fundref), to related research products (text, data, software) and to the people and institutions involved.

For their data and software publication activities, GFZ Data Services provides an XML metadata editor that assists scientists in creating metadata in different international metadata schemas (ISO19115, DataCite) while remaining usable by and understandable for scientists (Ulbricht et al., 2017, 2020). With the launch of the new GFZ Data Services website in 2022, user guidance has increased significantly, and the website has developed from a (merely searchable) data portal into an information point for data publications and data management. This includes information on metadata, data formats, the data publication workflow, FAQs, links to different versions of the metadata editor, and downloadable data description templates.
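The kind of record such an editor produces can be illustrated with the mandatory properties of the DataCite metadata kernel. A minimal sketch, where all field values (including the DOI) are invented for illustration:

```python
# Assemble and validate the mandatory properties of a DataCite-style
# metadata record, similar in spirit to what a metadata editor enforces.
# The property list follows DataCite's mandatory kernel; values are invented.

MANDATORY = {"identifier", "creators", "titles", "publisher",
             "publicationYear", "resourceType"}

def missing_mandatory(record: dict) -> set:
    """Return the mandatory DataCite properties absent from a record."""
    return MANDATORY - record.keys()

record = {
    "identifier": "10.5880/example.doi",          # hypothetical DOI
    "creators": [{"name": "Doe, Jane"}],
    "titles": [{"title": "Example long-tail dataset"}],
    "publisher": "GFZ Data Services",
    "publicationYear": 2022,
    "resourceType": "Dataset",
}
print(sorted(missing_mandatory(record)))  # [] -> record is complete
```

An editor built on this idea can refuse to submit a record until `missing_mandatory` returns an empty set, which is one way repositories guarantee minimally complete metadata.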

How to cite: Ott, F., Elger, K., and Frenzel, S.: Tools and incentives for the curation of geosciences data. Experiences from GFZ Data Services, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12563, https://doi.org/10.5194/egusphere-egu22-12563, 2022.

EGU22-12874 | Presentations | ESSI3.1

The AuScope Geochemistry Network: Facilitating Better Organisation, Coordination and Ability to Share Data Produced by Australian Geochemistry Laboratories 

Bryant Ware, Alexander Prent, Samual Boone, Hayden Dalton, Guillaume Florin, Yoann Greau, Fabian Kohlmann, Moritz Theile, Wayne Noble, Erin Matchan, Barry Kohn, Andrew Gleadow, and Brent McInnes

One of the greatest challenges in the global geochemistry community is to aggregate the large amounts of geochemical data generated by laboratories and to make them FAIR [Findable, Accessible, Interoperable, and Reusable] and publicly available. Standardisation and data organisation have often been individual or voluntary/uncoordinated efforts and/or motivated by the likelihood of immediate or near-future publication. Along with the technical challenges of getting laboratory data into a well-structured relational database and linking them to samples’ metadata, societal and cultural issues often surround the standardisation and accessibility of data reporting (e.g. equipment manufacturers, funding body proprietary data outputs, data reduction software accessibility, and the requirements/“data ownership” of the users/scientists).

 

In response to a nationally expressed need to address the challenges outlined above and for better organisation and coordination of Australian geochemistry laboratories and data, AuScope funded the AuScope Geochemistry Network (AGN) in 2019. The AGN comprises a team of researchers, data scientists, and technical staff from three universities across Australia: Curtin University, the University of Melbourne, and Macquarie University, tasked with coordinating and strategising the best approach to:

  • Unite the diverse Australian geochemistry community.
  • Promote national capability (existing geochemical capability).
  • Promote investment in infrastructure (new, advanced geochemical infrastructure).
  • Support increased end user access to laboratory facilities.
  • Support professional development via online tools, training courses and workshops.
  • Preserve legacy data sets.

 

Over the last two years, the AGN has worked to organise the geochemistry community and provide solutions for the integration and adoption of international best practices for data management. With the ‘end in mind’, the AGN and its collaborator Lithodat have developed the AusGeochem platform, a unique research data platform that serves laboratory needs, bridges the gap between sample metadata and analytical data, and strengthens the user-laboratory connection. To establish data reporting tables that fit the community’s needs while facilitating FAIR data practices and integrating international best practices for handling geochemistry data, the AGN led and coordinated Expert Advisory Groups composed of geochemical specialists from a number of Australian institutions. Along with the AusGeochem platform, which allows laboratories to upload, archive, disseminate and publish their datasets, the AGN has built LabFinder, a web application that helps geoscience users find and access the right laboratory and analytical technique to answer their research questions. LabFinder aims to continue supporting end-user access to laboratory facilities, improving the capability and capacity of geochemistry laboratories on a national scale. In the coming two years, the AGN will build upon these accomplishments by expanding its data partnerships through the onboarding of institutions hosting major geochemistry laboratories, further facilitating collaborations between Australian geochemistry laboratories.

How to cite: Ware, B., Prent, A., Boone, S., Dalton, H., Florin, G., Greau, Y., Kohlmann, F., Theile, M., Noble, W., Matchan, E., Kohn, B., Gleadow, A., and McInnes, B.: The AuScope Geochemistry Network: Facilitating Better Organisation, Coordination and Ability to Share Data Produced by Australian Geochemistry Laboratories, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12874, https://doi.org/10.5194/egusphere-egu22-12874, 2022.

EGU22-13354 | Presentations | ESSI3.1

FID GEO: Promoting Open Science in the Geosciences 

Melanie Lorenz, Kirsten Elger, Inke Achterberg, Marcel Meistring, Norbert Pfurr, and Malte Semmler

The change towards Open Science practices is increasingly demanded by science policy and affects the publication culture as well as the information infrastructures. This includes the transition to Open Access for journals and publishers (including the development of new business models), as well as the growing need to make the data, scientific software and samples underlying scientific results available to the general public.

The DFG-funded Specialised Information Service for Geoscience FID GEO (Fachinformationsdienst Geowissenschaften) aims at (1) reducing structural deficits in the area of electronic information and (2) promoting Open Science throughout the research life cycle. The service is hosted at the Göttingen State and University Library (SUB Göttingen) in Lower Saxony and the GFZ German Research Centre for Geosciences in Potsdam. The FID GEO team is made up of highly connected librarians, data publishing professionals, and geoscientists. Over the past five years, FID GEO has become a key player in the promotion of Open Science in the geosciences and occupies a central position connecting researchers, data repositories, information infrastructures, German geoscience societies and publishers. FID GEO actively offers data and text publication services via its associated repositories GFZ Data Services and GEO-LEOe-docs, as well as digitisation on demand of print-only geoscience literature and maps.

On the one hand, FID GEO aims at informing the geoscientific community about all aspects of Open Science; on the other, it is available for questions and support, e.g. during data publications, during the transition of geoscience society journals to an Open Access model, or when obtaining a DOI for an article under the Green Open Access model. An online questionnaire in 2021 revealed a high demand for information, particularly on topics such as licenses, persistent identifiers (ORCID, ROR, IGSN), and measures to ensure data quality and integrity in order to enable high-quality, citable data publications. In the first funding phases of the project, workshops and talks have proven to be very successful tools to meet the large need for discussion, as they allow questions or uncertainties regarding practical aspects to be addressed directly. Information events are prepared specifically for the individual target groups: researchers, German geoscience societies, and members of infrastructural support units such as libraries (e.g., while societies are more interested in the development of guidelines, librarians have a specific interest in licensing and copyright issues). To intensify the open information culture in the geosciences, FID GEO collaborates with strategic (inter)national initiatives such as NFDI4Earth, COPDESS and OneGeochemistry.

How to cite: Lorenz, M., Elger, K., Achterberg, I., Meistring, M., Pfurr, N., and Semmler, M.: FID GEO: Promoting Open Science in the Geosciences, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13354, https://doi.org/10.5194/egusphere-egu22-13354, 2022.

EGU22-6332 | Presentations | ESSI3.2

Implementation of a FAIR Compliant Automated Workflow for Infrastructures 

Ulrich Bundke, Marcel Kennert, Christoph Mahnke, Susanne Rohs, and Andreas Petzold

The European infrastructure In-service Aircraft for a Global Observing System (IAGOS) (www.iagos.org) has implemented an automatic workflow for data management, organizing the dataflow from the sensor to the central data portal located in Toulouse. The workflow is implemented and documented using the web-based Django framework, following a model-based approach in Python.

This workflow performs all necessary data processing and QA/QC tests to automatically upload near-real-time (NRT) processed data, and serves the PI as the basis for approval decisions. This includes repeated cycles for different stages of data maturity. The PI can monitor the status of all tasks through web-based reports produced by the Task Manager. Automated reprocessing is possible because metadata on all steps, as well as the PI's decisions, are stored. The implementation of the workflow is a major step towards making IAGOS data handling compliant with the FAIR principles (findable, accessible, interoperable, reusable).
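The pattern of running data through QC stages while recording every decision for later reprocessing can be sketched as follows (stage names, variable names, and plausibility limits are illustrative assumptions, not the IAGOS implementation):

```python
# A staged QA/QC pipeline with recorded provenance, in the spirit of the
# workflow described above. The range check and its limits are invented
# for illustration; IAGOS applies its own, parameter-specific tests.

def qc_range(sample: dict) -> bool:
    """Flag physically implausible air temperatures (illustrative limits)."""
    return -95.0 <= sample["air_temp_C"] <= 60.0

def run_pipeline(samples: list) -> dict:
    """Run each sample through the QC stage, logging every decision."""
    log = []        # provenance: one entry per sample and stage
    accepted = []
    for s in samples:
        ok = qc_range(s)
        log.append({"sample": s, "stage": "range_check", "passed": ok})
        if ok:
            accepted.append(s)
    return {"accepted": accepted, "log": log}

result = run_pipeline([{"air_temp_C": -52.3}, {"air_temp_C": 999.9}])
print(len(result["accepted"]))  # 1: the implausible value is rejected
```

Because the log captures every decision alongside the inputs, the whole run can be replayed deterministically, which is what makes automated reprocessing possible.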

The workflow is easily adaptable to manage the dataflow of other infrastructures or research institutes. We will therefore open the development under the MIT license and invite other data centers to contribute.

Acknowledgments:

This work was supported by European Union's Horizon 2020 research and innovation programme under grant agreement No 824068 and by Helmholtz STSM Grant “DIGITAL EARTH”

How to cite: Bundke, U., Kennert, M., Mahnke, C., Rohs, S., and Petzold, A.: Implementation of a FAIR Compliant Automated Workflow for Infrastructures, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6332, https://doi.org/10.5194/egusphere-egu22-6332, 2022.

EGU22-7228 | Presentations | ESSI3.2

French feedback from urban soil geochemical data archive to data sharing: state of mind and intent 

Cecile Le Guern, Jean-François Brunet, Philippe Négrel, Sandrine Lemal, Etienne Taffoureau, Sylvain Grellet, Mickael Beaufils, Clément Lattelais, Christine Le Bas, and Hélène Roussel

Urban territories collect many types of geochemical and physico-chemical data relating to, e.g., soil quality or soil functions. Such data may serve various purposes, such as verifying compatibility with current or future uses, defining (pedo)geochemical backgrounds, establishing levels of exposure to soil pollutants, identifying management options for polluted sites or for excavated soils, verifying the evolution of infiltration ponds, assessing carbon storage, etc. They may also serve to prioritise soil functions and associated ecosystem services such as soil fertility, surface and groundwater storage or supply, and purification of infiltrated rainwater. Gathering such data in national databases and making them available to stakeholders raises many technical, legal and social issues. Should all of the data be made available, or only selected portions? How can access to and reuse of the data be ensured in a legal fashion? Are statistical and geostatistical methods able to deal with data from heterogeneous origins, allowing their reuse for purposes other than the initial one? In this context, it is necessary to take into account scientific as well as practical considerations and to collect the societal needs of end-users such as urban planners.

 

To illustrate the complexity of these issues and ways to address them, we propose to share the French experience:

  • on gathering urban soil geochemical data in the French national database BDSolU. We will present how this database was created, the choices made in relation with the national context, the difficulties encountered, and the questions that are still open.
  • on a new interrogation system linking agricultural and urban soil databases (DoneSol and BDSolU), which have different requirements, and the corresponding standards. Such linkage based on interoperability is important in the context of changes of soil use, with for example agricultural soils becoming urbanised soils, or soils from brownfields intended for gardening. It is also necessary to ensure a territorial continuity for users.

The objective is to define a robust and standardised methodology for database conceptualisation, sharing and final use by stakeholders, including scientists.

How to cite: Le Guern, C., Brunet, J.-F., Négrel, P., Lemal, S., Taffoureau, E., Grellet, S., Beaufils, M., Lattelais, C., Le Bas, C., and Roussel, H.: French feedback from urban soil geochemical data archive to data sharing: state of mind and intent, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7228, https://doi.org/10.5194/egusphere-egu22-7228, 2022.

EGU22-8262 | Presentations | ESSI3.2

Data Access Made Easy: flexible, on the fly data standardization and processing 

Mathias Bavay, Charles Fierz, and Rodica Nitu

Automatic Weather Stations (AWS) deployed in the context of research projects provide very valuable data thanks to the flexibility they offer in terms of measured meteorological parameters, choice of sensors, and quick deployment and redeployment. However, this flexibility is a challenge in terms of metadata and data management. Traditional approaches based on networks of standard stations cannot accommodate these needs, and often no tools are available to manage these research AWS, leading to wasted data periods because of difficult data reuse, low reactivity in identifying potential measurement problems, and a lack of metadata documenting what happened.

The Data Access Made Easy (DAME) effort is our answer to these challenges. At its core, it relies on the mature and flexible open source MeteoIO meteorological pre-processing library, originally developed as a flexible data processing engine for numerical models consuming meteorological data and further developed as a data standardization engine for the Global Cryosphere Watch (GCW) of the World Meteorological Organization (WMO). For each AWS, a single configuration file describes how to read and parse the data, defines a mapping between the available fields and a set of standardized names, and provides relevant Attribute Conventions Dataset Discovery (ACDD) metadata fields, if necessary on a per-input-file basis. Low-level data editing is also available, such as excluding a given sensor, swapping sensors, or merging data from another AWS for any given time period. Moreover, an arbitrary number of filters can be applied to each meteorological parameter, restricted to specific time periods if required. This makes it possible to describe the whole history of an AWS within a single configuration file and to deliver a single, consistent, standardized output file possibly spanning many years, many input data files, and many changes in both format and available sensors. Finally, all configuration files are kept in a git repository to document their history.
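A station's whole processing description might then look roughly like the following INI-style fragment. This is a loose illustration of the approach only: the section names, keys, and station details are assumptions, not verbatim MeteoIO syntax.

```ini
[Input]
METEO      = CSV                      ; how to read and parse the raw data
METEOPATH  = ./raw/aws_example        ; hypothetical station directory
FIELDS     = timestamp ta rh          ; map file columns to standard names

[Metadata]
acdd_title   = Example research AWS   ; illustrative ACDD fields
acdd_license = CC-BY-4.0

[InputEditing]
RH::exclude = 2019-07-01 - 2019-07-14 ; faulty sensor period, documented here

[Filters]
TA::filter1 = min_max                 ; plausibility range, per parameter
TA::arg1    = -50 40                  ; restricted to this parameter only
```

Keeping such files under git versioning then gives the full, auditable history of each station's processing alongside the data themselves.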

A basic email-based interface has been developed that allows users to create new configuration files, modify existing ones, or request data on demand for any time period. Every hour, the data for all available configuration files are regenerated for the last 13 months and stored on a shared drive, so everyone can access the current data without even having to submit a request. A table is generated showing all warnings and errors produced during data generation, along with metadata such as the data owner's email address, so that data owners can quickly spot troublesome AWS.

How to cite: Bavay, M., Fierz, C., and Nitu, R.: Data Access Made Easy: flexible, on the fly data standardization and processing, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8262, https://doi.org/10.5194/egusphere-egu22-8262, 2022.

The TRR170-DB data repository (https://planetary-data-portal.org/) is a Re3data (re3data.org) referenced repository that manages new machine-readable data and resources from the collaborative research center ‘Late Accretion onto Terrestrial Planets’ (TRR 170) and from other institutions in the planetary science community. Data in the repository reflect the diverse methods and approaches applied in the planetary sciences, including astromaterials data, experimental studies, remote sensing data, images, and geophysical modeling data. The TRR170-DB repository follows a data policy and practice that supports Open Science and the FAIR principles (Wilkinson et al., 2016) as promoted by the German National Research Data Infrastructure (www.nfdi.de) and various national and international funding agencies and initiatives. The TRR170-DB framework helps users align their data storage with the data life cycle of data sharing, persistent data citation, and data publishing. The permanent host of TRR170-DB is Freie Universität Berlin. This long-term preservation of, and access to, TRR170-DB’s published data ensures that they can be reused by researchers and the interested public.

The TRR170-DB repository is operated using the open-source data management software Dataverse (dataverse.org). A web portal provides access to the storage environment of the datasets and guides users through the process of data storage and publication. It also informs users about legal conditions and embargo periods that safeguard the data publication process, as well as about data management and data-publication-related news and training events.

A user can search metadata to find specific published data collections and files without logging in to TRR170-DB. A recently integrated tool, the data explorer, assists the user in advanced searches to browse and find published data content. Data suppliers receive data curation services, a permanent archive, and a digital object identifier (DOI) that makes the dataset unique and findable. We encourage TRR 170 members and other users to store replication datasets by implementing publishing workflows that link publications to data. These replication datasets are freely available, and no permission is required for reuse and verification of a study. TRR170-DB has a flexible, data-driven metadata system that uses tailored metadata blocks for specific data communities. Once a dataset has been published, its metadata and files can be exported in various open metadata standards and file formats. This ensures that all data published in the repository are generally accessible to other external databases and repositories (“interoperability”).
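The export step described above can be sketched as a mapping from an internal record onto a minimal DataCite-style structure. The internal field names and the placeholder DOI below are invented for illustration; this is not TRR170-DB's actual schema or a complete DataCite export.

```python
import json

# Internal dataset record (hypothetical field names, placeholder DOI).
dataset = {
    "title": "Replication data for a late-accretion study",
    "authors": ["Lehmann, E.", "Becker, H."],
    "doi": "10.xxxx/example",  # placeholder, not a registered DOI
    "year": 2022,
}

def to_datacite_like(ds):
    """Map internal fields onto a minimal DataCite-style structure.

    Illustrative subset of the schema only: a real export would cover
    all mandatory properties and be validated against the schema."""
    return {
        "identifier": {"identifier": ds["doi"], "identifierType": "DOI"},
        "titles": [{"title": ds["title"]}],
        "creators": [{"creatorName": a} for a in ds["authors"]],
        "publicationYear": str(ds["year"]),
    }

print(json.dumps(to_datacite_like(dataset), indent=2))
```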

We are currently expanding metadata templates to improve interoperability, findability, preservation, and reuse of geochemical data in TRR170-DB. New geochemical metadata templates will incorporate additional standardized information on samples and materials, analytical methods, and additional experimental data. Advancing metadata templates will be an ongoing process in which the international scientific community and various initiatives (OneGeochemistry, Astromaterials Data System, etc.) need to interact and discuss what is required.

How to cite: Lehmann, E. and Becker, H.: The TRR170-DB Data Repository: The Life Cycle of FAIR Planetary Data from Archive to Publication, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9960, https://doi.org/10.5194/egusphere-egu22-9960, 2022.

As volumes of geoanalytical data grow, research in geochemistry, volcanology, petrology, and other disciplines working with geoanalytical data is evolving towards data-driven and computational approaches that have enormous potential to lead to new scientific discoveries. Application of advanced methods for data mining and analysis, including machine learning and artificial intelligence, as well as the generation of models for simulating natural processes, all require seamless machine-readable access to large interoperable stores of consistently structured and documented geochemical data. Standard protocols, formats, and vocabularies are also critical for processing, managing, and publishing these growing data volumes efficiently, with seamless workflows supported by interoperable tools.

Today, easy integration of data into Analysis Ready Data stores and the successful and efficient application of new research methodologies to these data stores are hindered by the fragmentation of the international geochemical data landscape, which lacks the technical and semantic standards for interoperability; organizational structures to guide and govern these standards; and a scientific culture that supports and prioritizes a global sustainable data infrastructure. In order to harness the scientific treasures hidden in big volumes of geochemical data, the science community, geochemistry data providers, publishers, funders, and other stakeholders need to come together to develop, implement, and maintain standards and best practices for geochemical data, and to commit to changing the current data culture in geochemistry. The benefits will be wide-ranging and will increase the relevance of the discipline.

Although many research data initiatives today focus on the implementation of the FAIR principles for Findable, Accessible, Interoperable, and Reusable data, most data are only human-readable, even though the original purpose of the FAIR principles was to make data machine-actionable. The development of standards today should not focus on spreadsheet templates used to format and compile project-centric databases that are hard to re-purpose: these methods are not scalable. The focus should be on global solutions in which any digital data are born connected to agreed machine-readable standards, so that researchers can utilize the latest AI and ML techniques.

Global standards for geochemical data should not be perceived as ‘one ring to rule them all’, but rather as a series of interoperable ‘rings’ of data which, like the Olympic rings, will integrate data from all continents and nations.

How to cite: Lehnert, K. and Wyborn, L.: Global Data Standards for Geochemistry: Not the ‘One Ring to Rule Them All’, but a set of ‘Olympic Rings’ that Link and Integrate across Continents, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10726, https://doi.org/10.5194/egusphere-egu22-10726, 2022.

EGU22-11103 | Presentations | ESSI3.2

Data amounts and reproducibility: How FAIR Digital Objects can revolutionise Research Workflows 

Ivonne Anders, Karsten Peters-von Gehlen, Hannes Thiemann, Martin Bergemann, Merret Buurman, Andrej Fast, Christopher Kadow, Marco Kulüke, and Fabian Wachsmann

Some disciplines, especially those that study the Earth system, work with large to very large amounts of data. Storing these data, but also processing them, places completely new demands on scientific work itself.

Let's take the example of climate research, specifically climate modelling. In addition to long-term meteorological measurements in the recent past, results from climate models form the main basis for research and statements on past and possible future global, regional, and local climate. Climate models are very complex numerical models that require high-performance computing. However, with the current and future increase in the spatial and temporal resolution of the models, the demand for computing resources and storage space is also growing. Previous working methods and processes no longer hold up and need to be rethought.

Taking the German Climate Computing Centre (DKRZ) as an example, we analysed the users, their goals, and their working methods. DKRZ provides the climate science community with resources such as high-performance computing (HPC), data storage, and specialised services, and hosts the World Data Center for Climate (WDCC). In analysing users, we distinguish between two groups: those who need the HPC system to run resource-intensive simulations and then analyse them, and those who reuse, build on, and analyse existing data. Each group subdivides further into subgroups. We analysed the workflows for each identified user type, found identical parts in an abstracted form, and derived Canonical Workflow Modules from them.

In the process, we critically examined the possible use of so-called FAIR Digital Objects (FDOs) and assessed to what extent the derived workflows and workflow modules are actually future-proof.

The vision is that the global integrated data space is formed by standardised, independent, and persistent entities that contain all information about diverse data objects (data, documents, metadata, software, etc.), so that human and, above all, machine agents can find, access, interpret, and reuse (FAIR) them in an efficient and cost-saving way. At the same time, these entities become independent of technologies and of the heterogeneous organisation of data, and will contain a built-in mechanism that supports data sovereignty. This will make the handling of data sustainable and secure.

Thus, each step in a research workflow can be an FDO. In this case, the research is fully reproducible, but parts can also be exchanged and, for example, experiments can be varied transparently. FDOs can easily be linked to one another. The redundancy of data is minimised, and thus the susceptibility to errors is reduced. FDOs open up the possibility of combining data, software, or whole parts of workflows in a new and simple but at all times comprehensible way. FDOs will make an important contribution to the reproducibility of research results, but they are also crucial for saving storage space. There are already datasets that are FDOs, as well as self-contained frameworks that store data by tracking workflows. Similar to the TCP/IP standard, DO interface protocols have already been developed. However, some open points are currently being worked on and defined with regard to FDOs in order to make them a globally functioning system.
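The chain of exchangeable, linkable workflow steps described above can be illustrated with a toy registry in which every object resolves to a record that references its inputs. The PID strings and record fields are invented for illustration; this is not an actual DO interface protocol.

```python
# Toy FDO registry: each object is a PID-keyed record pointing at its
# inputs, so any result can be resolved back through the whole workflow
# chain. PIDs and record fields are invented for illustration.
registry = {}

def register(pid, kind, inputs=()):
    registry[pid] = {"kind": kind, "inputs": list(inputs)}
    return pid

data = register("fdo:21.T/forcing-2020", "data")
code = register("fdo:21.T/model-v1.3", "software")
run = register("fdo:21.T/run-42", "workflow-step", inputs=[data, code])
result = register("fdo:21.T/tas-anomaly", "data", inputs=[run])

def provenance(pid):
    """Resolve a PID and recursively collect everything it depends on."""
    chain = [pid]
    for parent in registry[pid]["inputs"]:
        chain.extend(provenance(parent))
    return chain

print(provenance(result))
```

Swapping one input PID (e.g. a new model version) and re-registering the run is exactly the "vary experiments transparently" case: the provenance chain of the new result records the change.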

How to cite: Anders, I., Peters-von Gehlen, K., Thiemann, H., Bergemann, M., Buurman, M., Fast, A., Kadow, C., Kulüke, M., and Wachsmann, F.: Data amounts and reproducibility: How FAIR Digital Objects can revolutionise Research Workflows, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11103, https://doi.org/10.5194/egusphere-egu22-11103, 2022.

EGU22-11348 | Presentations | ESSI3.2

GEOROC and EarthChem: Optimizing Data Services for Geochemistry through Collaboration 

Marthe Klöcking, Kerstin Lehnert, Lucia Profeta, Bärbel Sarbas, Jan Brase, Sean Cao, Juan David Figueroa, Wolfram Horstmann, Peng Ji, Annika Johansson, Leander Kallas, Stefan Möller-McNett, Mariyam Mukhumova, Jens Nieschulze, Adrian Sturm, Hannah Sweets, Matthias Willbold, and Gerhard Wörner

Geochemical data are fundamental to understanding many planetary and environmental processes; yet, in the absence of a community-endorsed data culture that adheres to common data standards, the geochemical data landscape is highly fragmented. The GEOROC and PetDB databases are leading open-access resources for geochemical and isotopic rock and mineral data that have collaborated for nearly 25 years to provide researchers with access to large volumes of curated and harmonized data collections. PetDB, operated by the EarthChem data facility, is a global synthesis of published chemical, isotopic, and mineralogical data for rocks, minerals, and melt inclusions, with a focus on igneous and metamorphic rocks from the ocean floor, ophiolites, xenolith samples from the Earth's mantle and lower crust, and tephra. Its counterpart, GEOROC, hosts a collection of published analyses of volcanic and plutonic rocks, minerals, and mantle xenoliths, predominantly derived from ocean islands and continental settings. These curated, domain-specific databases are increasingly valuable to data-driven and interdisciplinary research and form the basis of hundreds of new research articles each year across numerous Earth science disciplines.

Over the last two decades, both GEOROC and EarthChem have invested great effort in operating data infrastructures for findable, accessible, interoperable, and reusable data, while working together to develop and maintain the EarthChem Portal (ECP) as a global open data service for the geochemical, petrological, mineralogical, and related communities. The ECP provides a single point of access to >30 million analytical values for >1 million samples, aggregated from independently operated databases (PetDB, NAVDAT, GEOROC, USGS, MetPetDB, DARWIN). Yet one crucial element of FAIR data is still largely missing: interoperability across different data systems, which would allow data in separately curated databases, such as GEOROC and PetDB, to be integrated into comprehensive, global geochemical datasets.

Both EarthChem and GEOROC have recently embarked on major new developments and upgrades to their systems to improve the interoperability of their data systems. The new Digital Geochemical Data Infrastructure (DIGIS) initiative for GEOROC 2.0 aims to develop a connected platform to meet future challenges of digital data-based research and provide advanced services to the community. EarthChem has been developing an API-driven architecture to align with growing demands for machine-readable, Analysis Ready Data (ARD). This has presented an opportunity to make the two data infrastructures more interoperable and complementary. EarthChem and DIGIS have committed to cooperation on system architecture design, data models, data curation, methodologies, best practices and standards for geochemistry. This cooperation will include: (a) joint research projects; (b) optimized coordination and alignment of technologies, procedures and community engagement; and (c) exchange of personnel, data, technology and information. The EarthChem-DIGIS collaboration integrates with the international OneGeochemistry initiative to create a global geochemical data network that facilitates and promotes discovery and access of geochemical data through coordination and collaboration among international geochemical data providers, in close dialogue with the scientific community and with journal publishers.

How to cite: Klöcking, M., Lehnert, K., Profeta, L., Sarbas, B., Brase, J., Cao, S., Figueroa, J. D., Horstmann, W., Ji, P., Johansson, A., Kallas, L., Möller-McNett, S., Mukhumova, M., Nieschulze, J., Sturm, A., Sweets, H., Willbold, M., and Wörner, G.: GEOROC and EarthChem: Optimizing Data Services for Geochemistry through Collaboration, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11348, https://doi.org/10.5194/egusphere-egu22-11348, 2022.

EGU22-11766 | Presentations | ESSI3.2

Implementing semantic data management for bridging empirical and simulative approaches in marine biogeochemistry 

Alexander Schlemmer, Julian Merder, Thorsten Dittmar, Ulrike Feudel, Bernd Blasius, Stefan Luther, Ulrich Parlitz, Jan Freund, and Sinikka T. Lennartz

CaosDB is a flexible semantic research data management system, released as open-source software. Its versatile data model and data integration toolkit allow for applications in complex and very heterogeneous scientific workflows and different scientific domains. Successful implementations include biomedical physics [1] and glaciology [2]. Here, we present a recent implementation of a use case in marine biogeochemistry with a special focus on bridging between experimental work and numerical ocean modelling. CaosDB is used to store, index, and link data during different stages of research on the marine carbon cycle: data from experiments and field campaigns are integrated and mapped onto semantic data structures. These data are then linked to data from numerical ocean simulations. The ocean model, here with a specific focus on natural marine carbon sequestration of dissolved organic carbon (DOC), uses the georeferenced data to evaluate model performance. By simultaneously linking empirical data and the sampled model parameter space together with the model output, CaosDB enhances the efficiency of model development. In the current implementation, simulated data are linked to georeferenced DOC concentration data. We plan to expand this to complex data sets including thousands of dissolved organic matter molecular formulae and metagenomes of pelagic microbial communities. The combined management of these heterogeneous data structures with semantic models allows us to perform complex searches and to connect seamlessly to automated data analysis pipelines.


[1] Fitschen, T.; Schlemmer, A.; Hornung, D.; tom Wörden, H.; Parlitz, U.; Luther, S. CaosDB—Research Data Management for Complex, Changing, and Automated Research Workflows. Data 2019, 4, 83. https://doi.org/10.3390/data4020083
[2] Schlemmer, A.; tom Wörden, H.; Freitag, J.; Fitschen, T.; Kerch, J.; Schlomann, Y.; ... & Luther, S. Evaluation of the semantic research data management system CaosDB in glaciology. deRSE 2019. https://doi.org/10.5446/42478

How to cite: Schlemmer, A., Merder, J., Dittmar, T., Feudel, U., Blasius, B., Luther, S., Parlitz, U., Freund, J., and Lennartz, S. T.: Implementing semantic data management for bridging empirical and simulative approaches in marine biogeochemistry, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11766, https://doi.org/10.5194/egusphere-egu22-11766, 2022.

EGU22-11980 | Presentations | ESSI3.2

From Field Application to Publication: An end-to-end Solution for FAIR Geoscience Data 

Moritz Theile, Wayne Noble, Romain Beucher, Malcolm McMillan, Samuel Boone, and Fabian Kohlmann

In this abstract we introduce a suite of free applications that produce FAIR-consistent, clean, and easily available geoscience data for research and industry alike.

Data creation starts with sample collection in the field and the assignment of a globally unique IGSN sample identifier to each sample. These samples, along with any subsequent analytical data, are stored in our fine-grained and detailed geochemical data models, which allow acquired datasets to be visualised and published. This unique solution has been developed by Lithodat Pty Ltd in conjunction with the AuScope Geochemical Network (AGN) and Australian geochemical laboratories, and can be accessed by the public on the AusGeochem web platform.

Using our fully integrated field application, users can enter and store all sample details on the fly during field collection; the data are stored in the user's private data collection. Once researchers return from the field, they can log into their account on the browser-based AusGeochem platform and view or edit all collected samples. After running subsequent geochemical analyses on a sample, the results, including all metadata, can be stored in the database and attached to the sample. Once uploaded, data can be visualised within AusGeochem using simple data analytics via technique-specific dashboards and graphs. The data can be shared with collaborators, downloaded in multiple formats, and made public, enabling FAIR data for the research community.

Here we show a complete sample workflow example, from collection in the field to the final result of a thermochronology study. The sample is analysed using fission-track and (U-Th)/He methods, and all associated data are uploaded and stored in the AusGeochem platform. Once all analyses are complete, the data are shared with collaborators and made available to the public. An important step in this process is the integrated IGSN minting option, which gives the sample a globally unique identifier and makes it discoverable worldwide.

Having all data stored in a clean and curated relational database with very detailed and fine-grained data models gives researchers free access to large amounts of structured and normalised data, helping them develop new technologies using machine learning and automated data integration in numerical models. Having all data in one place, including metadata such as the ORCIDs of involved researchers, funding sources, grant numbers, and laboratories, enables the quantification and quality assessment of research projects over time.

How to cite: Theile, M., Noble, W., Beucher, R., McMillan, M., Boone, S., and Kohlmann, F.: From Field Application to Publication: An end-to-end Solution for FAIR Geoscience Data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11980, https://doi.org/10.5194/egusphere-egu22-11980, 2022.

EGU22-12096 | Presentations | ESSI3.2

Identification and Long-lasting Citability of Dynamic Data Queries on EMSO ERIC Harmonized Data 

Ivan Rodero, Andreu Fornós, Raul Bardaji, Stefano Chiappini, and Juanjo Dañobeitia

The European Multidisciplinary Seafloor and water-column Observatory (EMSO) European Research Infrastructure Consortium (ERIC) is a large-scale European Strategy Forum on Research Infrastructures (ESFRI) member, with strategically placed sea observatories whose essential scientific objective is the real-time, long-term monitoring of environmental processes related to the interaction between the geosphere, biosphere, and hydrosphere. EMSO ERIC collects, curates, and provides high-quality oceanographic measurements from the surface to the deep seafloor to support long-term time series and oceanographic modeling. In addition, EMSO ERIC has developed a set of data services that harmonize its regional facilities’ data workflows, enhancing efficiency and productivity, supporting innovation, and enabling data- and knowledge-based discovery and decision-making. These services are developed in connection with the ESFRI cluster of Environmental Research Infrastructures (ENVRI) through the adoption of the FAIR data principles (findability, accessibility, interoperability, and reusability), supported by the ENVRI-FAIR H2020 project.

EMSO ERIC’s efforts in adopting the FAIR principles include the use of globally unique and resolvable persistent identifiers (PIDs), in alignment with the ENVRI-FAIR task forces. We present a service for the identification and long-lasting citability of dynamic data queries on harmonized data sets generated by EMSO ERIC users. The service follows the recommendations of the Research Data Alliance (RDA) working group on data citation and has been integrated into the EMSO ERIC data portal. User-built queries on the data portal are served by the EMSO ERIC Application Programming Interface (API), which retrieves the requested data and provides a Uniform Resource Identifier (URI) for the query, visualizations, and data sets in CSV and NetCDF formats. The data portal allows users to request a PID for a data query by providing mandatory and optional metadata through an online form. The mandatory metadata consist of a description of the data and specific information about the creators, personal or organizational, including their identifiers and affiliations. The optional metadata consist of additional titles and descriptions that the user finds relevant. The service provides a permalink to a web page maintained within the data portal with the PID reference, the metadata, and the URI of the data query. The web pages associated with PIDs also offer the option to request a Digital Object Identifier (DOI) if users are authorized via the EMSO ERIC Authorization and Authentication Infrastructure (AAI) system.
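One common way to implement the RDA recommendation for citing dynamic data is to derive a stable key from the normalized query parameters plus the execution timestamp. The sketch below illustrates that pattern only; it is not EMSO ERIC's actual API, and the parameter names are invented.

```python
import hashlib
from datetime import datetime, timezone

def query_pid(params, executed_at):
    """Derive a stable key for a dynamic data query by hashing its
    normalized parameters together with the execution timestamp.

    A sketch of the RDA data-citation pattern: sorting the parameters
    makes the key independent of the order in which they were given."""
    normalized = "&".join(f"{k}={params[k]}" for k in sorted(params))
    payload = f"{normalized}@{executed_at.isoformat()}"
    digest = hashlib.sha256(payload.encode()).hexdigest()[:16]
    return f"query/{digest}", normalized

pid, canonical = query_pid(
    {"platform": "OBSEA", "parameter": "TEMP", "depth": "20m"},
    datetime(2022, 5, 23, 12, 0, tzinfo=timezone.utc),
)
print(pid, canonical)
```

Identical queries issued with parameters in any order yield the same key, which is the property that makes a citation of a dynamic subset stable over time.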

How to cite: Rodero, I., Fornós, A., Bardaji, R., Chiappini, S., and Dañobeitia, J.: Identification and Long-lasting Citability of Dynamic Data Queries on EMSO ERIC Harmonized Data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12096, https://doi.org/10.5194/egusphere-egu22-12096, 2022.

EGU22-13188 | Presentations | ESSI3.2

The UCLA Cosmochemistry Database 

Bidong Zhang, Paul H. Warren, Alan E. Rubin, Kerstin Lehnert, Lucia Profeta, Annika Johansson, Peng Ji, Juan David Figueroa-Solazar, and Jennifer Mays

The UCLA Cosmochemistry Database was initiated as a data rescue project aiming to archive a variety of cosmochemical data acquired at the University of California, Los Angeles. The database will ensure that future studies can use and reference these data in the examination, analysis and classification of new extraterrestrial samples.

The database is being developed in collaboration with the Astromaterials Data System (AstroMat), which will provide persistent access to and archiving of the database. The database is a work in progress: we will continue to make additions, updates, and improvements.

The database includes elemental compositions of extraterrestrial materials (including iron meteorites, chondrites, Apollo samples, and achondrites) analyzed by John T. Wasson, Paul H. Warren, and their coworkers using atomic absorption spectrometry (AAS), neutron activation analysis (NAA), and electron microprobe analysis (EMPA) at UCLA over the last six decades. The team began using instrumental NAA (INAA) to analyze iron meteorites, lunar samples, and stony meteorites in the late 1970s [1]. Some achondrites and lunar samples were analyzed by EMPA. Some of the UCLA data have been published, but most were neither digitized nor stored in a single repository.

Compositional data have been compiled by the UCLA team from publications, unpublished files, and laboratory records into datasets using Astromat spreadsheet templates. These datasets are submitted to the Astromat repository. Astromat curators review the datasets for metadata completeness and correctness, register them with DataCite to obtain a DOI and make them citeable, and package them for long-term archiving. To date, we have compiled data from 52 journal articles; each article has its own separate dataset. Data and metadata of these datasets are then incorporated into the Astromat Synthesis database.

The UCLA datasets are publicly accessible at the Astromat Repository, where individual datasets can be searched and downloaded. The UCLA cosmochemical data can also be accessed as part of the Astromat Synthesis database, where they are identified as a special ‘collection’. Users may search, filter, extract, and download customized datasets via the user interface of the Astromat Synthesis database (AstroSearch). Users will be able to access the UCLA Cosmochemistry Database directly from the home page of AstroMat (https://www.astromat.org/).

We plan to include EMPA data of lunar samples and achondrites, and add scanned PDF files of laboratory notebooks and datasheet binders that are not commonly published in journals. These PDF files contain information on irradiation date, mass, elemental concentrations, and classification for each iron specimen, and John Wasson’s personal notes on meteorites. We will also add backscattered-electron (BSE) images, energy dispersive spectroscopy (EDS) images, and optical microscopy images.

The Astromat team is currently working to develop plotting tools for the interactive tables.

Acknowledgments: We thank John Wasson and his coworkers for collecting the cosmochemical data for the last 60 years. Astromat acknowledges funding from NASA (grant no. 80NSSC19K1102).

References: [1] Scott E.R.D et al. (1977) Meteoritics, 12, 425–436.

How to cite: Zhang, B., Warren, P. H., Rubin, A. E., Lehnert, K., Profeta, L., Johansson, A., Ji, P., Figueroa-Solazar, J. D., and Mays, J.: The UCLA Cosmochemistry Database, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13188, https://doi.org/10.5194/egusphere-egu22-13188, 2022.

EGU22-13317 | Presentations | ESSI3.2

The critical role of unique identification of samples for the geoanalytical data pipeline 

Kerstin Lehnert, Jens Klump, Sarah Ramdeen, Kirsten Elger, and Lesley Wyborn

When researchers collect or create physical samples, they usually assign a user-generated number to each sample. Subsequently, the sample may be submitted to a laboratory for analysis of a variety of analytes. However, as geoanalytical laboratories generate ever-increasing volumes of data, most laboratories have automated workflows, and it is no longer feasible for them to use researcher-supplied sample numbers, particularly as there is no guarantee that these numbers are unique among those submitted by other users to the same laboratory. To address this issue, a new laboratory-generated number may be assigned to the sample.

Moreover, as a single laboratory rarely has the capability to offer all analytical techniques, individual samples tend to move from laboratory to laboratory to acquire the desired suite of analytes. Each laboratory may assign a different number to the sample. At the conclusion of the project, the researcher may submit the same sample to a museum or institutional repository, where the sample will be assigned yet another institution-generated number to ensure that all samples in that repository are uniquely identified.

Ultimately, by the time the researcher submits an article to a journal and wants to identify samples in the text or tables, they may have a multitude of locally generated numbers to choose from. Not one of the numbers locally assigned to the sample can be guaranteed to be globally unique. It is also unlikely that any of these local numbers will be persistent over the longer term (decades), or that they will resolve to the location of the identified resource or to any information about it elsewhere on the web (metadata, landing page, related services, etc.).

Globally unique, persistent, resolvable identifiers such as the IGSN play a critical role in the unique identification of geoanalytical samples that pass between systems and organisations: they cannot be duplicated by another researcher, laboratory, or sample repository. They persistently link to information about the origin of the sample; to the people involved in its creation (collector, institution, funder); to the laboratory data and their creation (analyst, laboratory, institution, funder, data software); and to the sample curation phase (curator, repository, funder). They connect the phases of a sample’s path from collection in the field, to lab analysis, to the synthesis/research phase, to publication, to the archive. Globally unique sample identifiers also enable cross-linkages to any artefacts derived from the sample (images, analytical data, other articles). Further, identifiers like the IGSN enable subsamples or sample splits to be linked back to their parent sample, creating a holistic picture of all information derived from the initial sample.
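The parent/subsample linkage just described can be sketched as a registry in which every split stores its parent's identifier, so anything measured on a split traces back to the originally collected sample. The IGSN-style strings below are made up for illustration; real IGSNs are minted and resolved by allocating agents.

```python
# Toy sample registry: each entry keeps its parent's identifier, so data
# generated on any split can be traced back to the original sample.
# These IGSN-style identifiers are invented for illustration.
samples = {
    "IGSN:XX0001": {"parent": None, "desc": "basalt hand sample"},
    "IGSN:XX0001A": {"parent": "IGSN:XX0001", "desc": "thin-section billet"},
    "IGSN:XX0001B": {"parent": "IGSN:XX0001", "desc": "powder for XRF"},
}

def root_sample(igsn):
    """Follow parent links from a subsample back to the original sample."""
    while samples[igsn]["parent"] is not None:
        igsn = samples[igsn]["parent"]
    return igsn

print(root_sample("IGSN:XX0001B"))  # → IGSN:XX0001
```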

Hence, best practice is clearly to assign the globally unique, resolvable identifier to the initial resource. Like a birth certificate, the identifier can then be carried through the progressive stages of the research ‘life cycle’, including laboratory analysis, generation of further data and images, publication, and ultimately curation and preservation. Where subsamples are derived, they, and any data generated from them, can be linked back to the parent identifier.

How to cite: Lehnert, K., Klump, J., Ramdeen, S., Elger, K., and Wyborn, L.: The critical role of unique identification of samples for the geoanalytical data pipeline, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13317, https://doi.org/10.5194/egusphere-egu22-13317, 2022.

EGU22-13330 | Presentations | ESSI3.2

EARThD: an effort to make East African tephra geochemical data available and accessible 

Erin DiMaggio, Sara Mana, and Cora VanHazinga

Tephra deposits are excellent chronostratigraphic markers that are prolific and widespread in portions of the East African Rift (EAR). Arguably one of the most powerful applications of tephrochronology is the establishment of regional chronological frameworks, enabling the integrated study of the timescales and interaction of the geosphere, hydrosphere, and biosphere. In order for these disparate disciplines to integrate and fully utilize the growing number of available tephra datasets, infrastructural efforts that centralize and standardize information are required. Of particular importance to these efforts is digitizing and standardizing previously published datasets to make them discoverable in alignment with current FAIR data reporting practices.

EARThD is an NSF-funded data compilation project that has integrated and standardized geochemical and geochronological data from over 400 published scientific papers investigating tephra datasets from the East African Rift. Our team has trained 15 undergraduate students in spreadsheet data entry and management, data mining, scientific paper comprehension, and East African tephrochronology. We utilize an existing NSF-supported community-based data facility, the Interdisciplinary Earth Data Alliance (IEDA), to store, curate, and provide access to the datasets. We are currently working with IEDA to ensure that data generated from EARThD are ingested into the IEDA Petrological Database (PetDB) and ultimately EarthChem, making them broadly available. Here we demonstrate our data entry process and how a user can locate, retrieve, and utilize EARThD tephra datasets. With this effort we aim to preserve available geochemical data for posterity, fulfilling a crucial data integration role for researchers working in East Africa, especially those working at paleontological and archeological sites where tephra dating and geochemical correlations are critical. The EARThD compilation also enables the data synthesis efforts required to address new science questions.

How to cite: DiMaggio, E., Mana, S., and VanHazinga, C.: EARThD: an effort to make East African tephra geochemical data available and accessible, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13330, https://doi.org/10.5194/egusphere-egu22-13330, 2022.

EGU22-13338 | Presentations | ESSI3.2

A workflow to standardize collection and management of large-scale data and metadata from environmental observatories 

Dylan O'Ryan, Charuleka Varadharajan, Erek Alper, Kristin Boye, Madison Burrus, Danielle Christianson, Shreyas Cholia, Robert Crystal-Ornelas, Joan Damerow, Wenming Dong, Hesham Elbashandy, Boris Faybishenko, Valerie Hendrix, Douglas Johnson, Zarine Kakalia, Roelof Versteeg, Kenneth Williams, Catherine Wong, and Deborah Agarwal

The Watershed Function Scientific Focus Area (WFSFA) is a U.S. Department of Energy research project that seeks to determine how mountainous watersheds retain and release water, carbon, nutrients, and metals. The WFSFA maintains a community field observatory at its primary field site in the East River, Colorado. The WFSFA collects diverse environmental data and has developed a “Field-Data” workflow that standardizes data management across the project, from field collection to laboratory analysis to publication. This workflow enables the WFSFA to address data quality and management challenges that environmental observatories face. 

Through this workflow, the WFSFA has increased the use of the data curated from the project by (1) providing detailed metadata with unique identifiers for samples, locations, and sensors, (2) streamlining the data sharing and publication process through early sharing of data internally within the team and publication of data on the ESS-DIVE repository following curation, and (3) adopting machine-readable and FAIR community data standards (Findability, Accessibility, Interoperability, Reusability). 

We describe an example application of this workflow for geochemical data, which utilizes a community geochemical data standard for water-soil-sediment chemistry (https://github.com/ess-dive-community/essdive-water-soil-sed-chem) developed by the Environmental Systems Science Data Infrastructure for a Virtual Ecosystem (ESS-DIVE). This data standard is designed to standardize geochemical data, metadata, and file-level metadata, and was applied to WFSFA geochemical data, including ICP-MS, isotope, ammonia-N, anion, and DIC/NPOC/TDN datasets. This ensures that important metadata are contained within the data file, such as the precision of data analysis, storage and sample processing information, detailed sample names, material information, and the unique identifiers associated with the samples (IGSNs). These metadata are essential for understanding and reusing data products, and they enable machine readability for future model applications. Detailed examples of the standardized geochemical data types were created and are now being used as templates by WFSFA researchers to standardize their geochemical data. The adoption of this community geochemical data standard, and more broadly the Field-Data workflow, will improve the findability and reusability of WFSFA datasets.
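As a minimal illustration of what such a standardized, self-describing data file can look like, the sketch below writes one row of a CSV in the spirit of the community reporting format; the column names and values are illustrative, not the normative term list of the ESS-DIVE standard.

```python
import csv
import io

# One illustrative row in the spirit of the water-soil-sediment
# chemistry reporting format.  All names and values here are invented
# for the sketch, including the IGSN.
rows = [
    {"Sample_Name": "ER-2021-001", "IGSN": "IE-EXAMPLE-0001",
     "Material": "Water", "Analyte": "NO3-N",
     "Value_mg_per_L": "0.42", "Detection_Limit_mg_per_L": "0.01"},
]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue().splitlines()[0])  # header row carrying the metadata columns
```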

How to cite: O'Ryan, D., Varadharajan, C., Alper, E., Boye, K., Burrus, M., Christianson, D., Cholia, S., Crystal-Ornelas, R., Damerow, J., Dong, W., Elbashandy, H., Faybishenko, B., Hendrix, V., Johnson, D., Kakalia, Z., Versteeg, R., Williams, K., Wong, C., and Agarwal, D.: A workflow to standardize collection and management of large-scale data and metadata from environmental observatories, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13338, https://doi.org/10.5194/egusphere-egu22-13338, 2022.

Efforts towards standardizing biogeochemical data from palaeoclimate archives such as speleothems, ice cores, corals, trees or marine sediments make it possible to tackle global-scale changes in palaeoclimate dynamics. These endeavours are sometimes initiated for very specific research questions. One such example is the multi-archive, multi-proxy dataset used to characterize changes in temperature variability from the Last Glacial Maximum to the current interglacial [1]. Here, we focused on collecting all published proxy time series for temperature that fulfilled sampling criteria, but we did not include extensive metadata.

Another, quite prominent, example is the database that grew out of the working group on Speleothem Isotopes Synthesis and Analysis (SISAL) in the Past Global Changes (PAGES) network. In its construction, researchers from all over the world collaborated to produce a quality-controlled data product with rich metadata. SISAL v2 [2] contains data from 691 speleothem records published over several decades, for more than 500 of which standardized age models were established. The community’s design and data collection made it possible to draw together metadata and observations and to reproduce the age modelling process of individual studies. This database serves a rich set of purposes, ranging from the evaluation of monsoon dynamics to that of isotope-enabled climate models [3].

Contrasting these two approaches, I will discuss the challenges that arise when multiple proxies, archives, modelling purposes and community standards need to be considered. I argue that the careful design of standardized data products allows for a new type of geoscience work, further catalyzed by digitization, forming a basis for tackling future-relevant palaeoclimatic and palaeoenvironmental questions at the global scale.

 

[1] Rehfeld, K., et al. "Global patterns of declining temperature variability from the Last Glacial Maximum to the Holocene." Nature 554.7692: 356-359, https://doi.org/10.1038/nature25454, 2018

[2] Comas-Bru, L., et al. (incl. SISAL Working Group members): SISALv2: a comprehensive speleothem isotope database with multiple age–depth models, Earth Syst. Sci. Data, 12, 2579–2606, https://doi.org/10.5194/essd-12-2579-2020, 2020.

[3] Bühler, J. C. et al: Comparison of the oxygen isotope signatures in speleothem records and iHadCM3 model simulations for the last millennium, Clim. Past, 17, 985–1004, https://doi.org/10.5194/cp-17-985-2021, 2021.

How to cite: Rehfeld, K.: Science building on synthesis: From standardized palaeoclimate data to climate model evaluation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13382, https://doi.org/10.5194/egusphere-egu22-13382, 2022.

EGU22-13429 | Presentations | ESSI3.2

AusGeochem: an Australian AuScope Geochemistry Network data platform for laboratories and their users 

Alexander M. Prent, Samuel C. Boone, Hayden Dalton, Yoann Gréau, Guillaume Florin, Fabian Kohlmann, Moritz Theile, Wayne Noble, Sally-Ann Hodgekiss, Bryant Ware, David Philips, Barry Kohn, Suzanne O’Reilly, Andrew Gleadow, Brent McInnes, and Tim Rawling

Over the last two years, the Australian AuScope Geochemistry Network (AGN) has developed AusGeochem in collaboration with the geoscience data solutions company Lithodat Pty Ltd. This open, cloud-based data platform (https://ausgeochem.auscope.org.au) serves as a geo-sample registry with IGSN minting capability, a geochemical data repository, and a data analysis tool. With guidance from geochemistry experts at a number of Australian institutions, and following international standards and best practices, various sample and geochemistry data models were developed that align with the FAIR data principles. AusGeochem currently accepts SIMS U-Pb, fission track and (U-Th-Sm)/He data, with LA-ICP-MS U-Pb, Lu-Hf and 40Ar/39Ar data models under development. Special attention is paid to streamlined workflows for AGN laboratories to facilitate easy upload of data from analytical sessions. Analytical results can then be shared with users through AusGeochem and, where required, can be kept fully confidential and under embargo for specified periods of time. Once the analytical data on individual samples are finalized, they can be made more widely accessible and, where required, combined into specific datasets that support publications.

How to cite: Prent, A. M., Boone, S. C., Dalton, H., Gréau, Y., Florin, G., Kohlmann, F., Theile, M., Noble, W., Hodgekiss, S.-A., Ware, B., Philips, D., Kohn, B., O’Reilly, S., Gleadow, A., McInnes, B., and Rawling, T.: AusGeochem: an Australian AuScope Geochemistry Network data platform for laboratories and their users, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13429, https://doi.org/10.5194/egusphere-egu22-13429, 2022.

MetBase is the world’s largest database of meteorite compositions [1], currently hosted in Germany. MetBase began more than 20 years ago as a private collector’s compilation of cosmochemical data. The database now comprises more than 500,000 individual data points covering, for instance, bulk and component chemical, isotopic and physical properties. Further, it holds more than 90,000 references from 1492 until today. In 2006, the high value of the database was acknowledged by the Meteoritical Society with its Service Award. MetBase has undergone substantial transitions in recent years from a purely commercial product to a donation-based, free-of-charge database, and its technical foundation has been completely modernised.

More recently, the Astromaterials Data System (AstroMat) has been developed as a data infrastructure to store, curate, and provide access to laboratory data acquired on samples curated in NASA’s Astromaterials Collections. AstroMat is intended to host data from past, present, and future studies. It is developed and operated by a team with long-term experience in developing and operating data systems for geochemical, petrological, mineralogical, and geochronological laboratory data acquired on physical samples: EarthChem and PetDB.

Astromat and MetBase are two initiatives with very different histories but a shared goal, and they therefore plan a common future. As part of this, we are currently starting a project to make MetBase data fully FAIR (findable, accessible, interoperable and reusable [2]), implementing the recently established Astromat database schema [3], which is based on the EarthChem data model. Astromat and MetBase are also working on new solutions for long-term, centralized hosting of both databases and a data input backbone.

Both MetBase and Astromat participate in the OneGeochemistry initiative to contribute to the development of community-endorsed and community-governed standards for FAIR laboratory analytical data that will allow seamless data exchange and integration. Access to the MetBase content will be provided both through Astromat and via a front end that is part of Germany’s recently initiated ›National Research Data Infrastructure‹ (NFDI), covering all scientific areas [4].

References: [1] http://www.metbase.org. [2] Stall et al. (2019): Make scientific data FAIR, Nature, 570(7759), 27–29. [3] https://www.astromat.org. [4] https://www.dfg.de/en/research_funding/programmes/nfdi/index.htm. [5] https://www.nfdi4earth.de.

How to cite: Hezel, D. C. and Lehnert, K. A.: Closing the gap between related databases: MetBase and the Astromaterials Data System (Astromat) plan for a common future, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13457, https://doi.org/10.5194/egusphere-egu22-13457, 2022.

Converting dynamic bottom-hole temperatures (BHTs) into static ones, whether for basin-model calibration or drilling planning, is a crucial step in hydrocarbon and geothermal exploration projects. However, records of temperature conversions may be ignored or lost from archives for various reasons, such as project team changes, a shift of focus to other areas, or simple deletion of data. The disappearance of previous studies not only disrupts geoscientific knowledge but also forces exploration geoscientists to repeat the time-consuming BHT conversion process from scratch.

NE Mediterranean Dashboard v1.0 addresses this issue using the data science tools of the Python ecosystem. Implementing Plotly Dash for the front end and PostgreSQL as the back end that keeps the thermal records in data tables, this open-source project offers a user-friendly web application displaying temperature, geothermal gradient and heat flow profiles in a dashboard style.

The application consists of three tabs. The Overview tab provides statistical information, while the 2D Plots tab allows users to interact with cross-plots showing thermal conditions for all wells or for a particular well selected by the user. It also compares the results of three different BHT conversion methods: the Horner-plot method, the AAPG correction, and Harrison et al. (1983). The last tab, Map View, shows temperature, geothermal gradient and heat flow maps for every 500 m from the surface down to 4.5 km depth. The maps reveal the effects of regional tectonics and how it controls subsurface thermal behaviour along the Cilicia and Latakia Basins that dominate the NE Mediterranean region.
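Of the three conversion methods, the Horner-plot method is the most classical: BHT readings are regressed against ln((tc + Δt)/Δt) and extrapolated to infinite shut-in time, where the intercept gives the static temperature. A minimal sketch of the idea (not the dashboard’s actual code, with invented synthetic readings) could look like:

```python
import math

def horner_correction(circulation_hours, shutin_temps):
    """Static formation temperature via the Horner-plot method.

    BHT is assumed linear in ln((tc + dt) / dt); the intercept at
    ln(...) = 0 (infinite shut-in time) is the static temperature.
    shutin_temps: list of (shut-in time [h], BHT [degC]) pairs.
    """
    xs = [math.log((circulation_hours + dt) / dt) for dt, _ in shutin_temps]
    ys = [t for _, t in shutin_temps]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return ybar - slope * xbar  # intercept = static temperature

# Synthetic well: true static temperature 120 degC, 5 h of circulation.
tc = 5.0
data = [(dt, 120.0 - 15.0 * math.log((tc + dt) / dt)) for dt in (6.0, 12.0, 24.0)]
print(round(horner_correction(tc, data), 2))  # -> 120.0
```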

All maps and cross-plots are interactive, and their styles can be changed according to the user’s preferences. They can also be downloaded as images for use in scientific publications and/or presentations. The shared interface and visualisation style, accessed by username and password, also provides consistency across all project members.

The source code is available on GitHub at https://github.com/Ayberk-Uyanik/NE-Mediterranean-Thermal-Conditions and can readily be adapted for exploration projects in other regions.

How to cite: Uyanik, A.: An open-source web application displaying present-day subsurface thermal conditions of the NE Mediterranean region, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-810, https://doi.org/10.5194/egusphere-egu22-810, 2022.

The number of publications in the field of Hydrology (and in other geoscience fields) is rising at an almost exponential rate. In 2021 alone, more than 25 000 articles were listed in Web of Science on the topic of Water Resources. There is a tremendous wealth of knowledge and data hidden in these articles, which capture our experience in studying places, datasets or models. Hidden, because we currently do not possess (or at least, do not use) the necessary tools to access this knowledge resource in an effective manner. It is increasingly difficult for an individual researcher to build on existing knowledge. New ways to approach this problem are urgently needed.  

One approach to address this problem of literature explosion might be to extend article metadata to include geoscience-specific information that can facilitate knowledge search, accumulation and synthesis in a domain-specific manner. Imagine one could easily find all studies performed in a specific location, climate or land use, thus allowing a full picture of the hydrology of that region, climate or land use. It is important for any geoscience, a field strongly dependent on experience, that knowledge is not “forgotten” in a mountain of publications but can easily be integrated into a larger understanding.

So what meta-information would be most useful in knowledge synthesis? Study location? Spatial and/or temporal scale? Models used? Here, we would like to (re-)start the discussion on geoscience-relevant metadata enrichment. With the recent advancement in text mining scholarly literature, it is critical to have this discussion now or fall behind.
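As a purely hypothetical sketch of what such an enriched metadata block might look like, the structure below attaches geoscience-specific fields to an article record; every field name and value is illustrative, not a proposed standard.

```python
import json

# Hypothetical geoscience-specific metadata that could travel alongside
# standard article metadata (title, authors, DOI, ...).  All field
# names and values below are invented for this sketch.
study_metadata = {
    "study_location": {"name": "East River, Colorado",
                       "lat": 38.96, "lon": -106.99},
    "spatial_scale_km2": 85.0,
    "temporal_coverage": {"start": "2015-01-01", "end": "2020-12-31"},
    "climate_class_koppen": "Dfc",
    "land_use": "alpine meadow",
    "models_used": ["HBV"],
}
# Machine-readable metadata like this could be harvested by text-mining
# pipelines or registries to support meta-analysis across studies.
print(json.dumps(study_metadata, indent=2).splitlines()[1])
```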

The Geosciences strongly depend on experiences we gain, which we largely share through the articles we publish. Knowledge accumulation in our science is hindered if this exchange of knowledge becomes ineffective. We are afraid it already has!

How to cite: Stein, L. and Wagener, T.: Knowledge hidden in plain sight – Extending article metadata to support meta-analysis and knowledge accumulation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3590, https://doi.org/10.5194/egusphere-egu22-3590, 2022.

EGU22-7330 | Presentations | ESSI3.3

ENES Data Space: an open, cloud-enabled data science environment for climate analysis 

Fabrizio Antonio, Donatello Elia, Andrea Giannotta, Alessandra Nuzzo, Guillaume Levavasseur, Atef Ben Nasser, Paola Nassisi, Alessandro D'Anca, Sandro Fiore, Sylvie Joussaume, and Giovanni Aloisio

The scientific discovery process has been deeply influenced by the data deluge that started at the beginning of this century. This has caused a profound transformation in several scientific domains, which are now moving towards much more collaborative processes.

In the climate sciences domain, the ENES Data Space aims to provide an open, scalable, cloud-enabled data science environment for climate data analysis. It represents a collaborative research environment, deployed on top of the EGI federated cloud infrastructure and specifically designed to address the needs of the ENES community. The service, developed in the context of the EGI-ACE project, provides ready-to-use compute resources and datasets, as well as a rich ecosystem of open-source Python modules and community-based tools (e.g., CDO, Ophidia, Xarray, Cartopy), all made available through the user-friendly Jupyter interface.

In particular, the ENES Data Space provides access to a multi-terabyte set of specific variable-centric collections from large community experiments to support researchers in climate model data analysis. Its data pool consists of a mirrored subset of CMIP datasets from the ESGF federated data archive, collected using the Synda community tool in order to provide the most up-to-date datasets in a single location. Results and output products, as well as experiment definitions (in the form of Jupyter Notebooks), can easily be shared among users through data sharing services such as EGI DataHub, which are also being integrated into the infrastructure.

The service was opened in the second part of 2021 and is now accessible in the European Open Science Cloud (EOSC) through the EOSC Portal Marketplace (https://marketplace.eosc-portal.eu/services/enes-data-space). This contribution will present an overview of the ENES Data Space service and its main features.
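A typical data-proximate analysis in such an environment might look like the following sketch, which uses Xarray on a toy stand-in for a CMIP near-surface temperature collection; the variable and dimension names follow CF/CMIP conventions, while in the real service the collections are read from the mirrored data pool rather than generated.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Toy stand-in for a CMIP "tas" (near-surface air temperature) cube;
# the real ENES Data Space collections come from the mirrored ESGF pool.
tas = xr.DataArray(
    273.15 + 10 * np.random.rand(24, 4, 8),
    dims=("time", "lat", "lon"),
    coords={"time": pd.date_range("2000-01-01", periods=24, freq="MS")},
    name="tas",
)

# A typical analysis step: spatial mean, then a monthly climatology.
climatology = tas.mean(("lat", "lon")).groupby("time.month").mean()
print(dict(climatology.sizes))  # -> {'month': 12}
```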

How to cite: Antonio, F., Elia, D., Giannotta, A., Nuzzo, A., Levavasseur, G., Ben Nasser, A., Nassisi, P., D'Anca, A., Fiore, S., Joussaume, S., and Aloisio, G.: ENES Data Space: an open, cloud-enabled data science environment for climate analysis, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7330, https://doi.org/10.5194/egusphere-egu22-7330, 2022.

EGU22-8914 | Presentations | ESSI3.3

ViCTool: An open-source tool for vegetation indices computation of aerial raster images using python GDAL 

Jenniffer Carolina Triana-Martinez, Jose A. Fernandez-Gallego, Oscar Barrero, Irene Borra-Serrano, Tom De Swaef, Peter Lootens, and Isabel Roldan-ruiz

For precision agriculture (PA) applications that use aerial platforms, researchers often need to extract, study and understand biophysical and structural crop properties in a spatio-temporal manner, using remotely sensed imagery to infer variations in vegetation biomass and/or plant vigour, irrigation strategies, nutrient use efficiency, stress, disease, among others. This requires measuring the spectral responses of the crop at specific wavelengths, for instance through Vegetation Indices (VIs). However, to analyse this spectral response and its heterogeneity and spatial variability, a large amount of aerial imagery must be collected and processed with photogrammetry software. Data extraction is often performed in a Geographic Information System (GIS) and then analysed using (in general) statistical software. A GIS provides the resources to manipulate, analyse, and display all forms of geographically referenced information. In this regard, Quantum GIS (QGIS) is one of the best-known open-source GIS packages, integrating geoprocessing tools from a variety of software libraries. QGIS is widely used to compute VIs through its raster calculator, although this computation requires the band rasters to be supplied manually by the user, one by one, which is time-consuming. QGIS also provides a Python interface to exploit the capabilities of a GIS in custom plugins, but writing such plugins can be a non-trivial task. In this work, we developed ViCTool (Vegetation Index Computation Tool), a QGIS-independent, semi-automatic, free and open-source software (FOSS) tool for large-scale data extraction that derives VIs from aerial raster images over regions of interest. The tool can extract several multispectral and RGB VIs from Blue, Green, Red, NIR, LWIR, or Red edge bands.
The user provides an input folder path containing one or more raster band folders, a shapefile with the regions of interest, an output path to store the resulting VI rasters, and a file defining the VI computations. ViCTool was developed using Python PyQt for the user interface (UI) and Python GDAL for raster processing, to simplify and speed up the computation of large amounts of data in an intuitive way.
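The core band arithmetic behind a typical VI can be sketched in a few lines; here NDVI is computed on plain NumPy arrays standing in for the GDAL-read rasters (in the tool itself the arrays would come from something like `gdal.Open(path).ReadAsArray()`), so this is an illustration of the computation rather than ViCTool’s actual code.

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    """Normalised Difference Vegetation Index from red and NIR bands.

    Bands are cast to float so integer raster types do not overflow,
    and a small eps guards against division by zero in no-data areas.
    """
    red = red.astype("float64")
    nir = nir.astype("float64")
    return (nir - red) / (nir + red + eps)

# Tiny 2x2 stand-ins for GDAL-read band rasters (values invented).
red = np.array([[50, 60], [70, 80]], dtype="uint16")
nir = np.array([[150, 160], [170, 180]], dtype="uint16")
print(ndvi(red, nir).round(2))
```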

How to cite: Triana-Martinez, J. C., Fernandez-Gallego, J. A., Barrero, O., Borra-Serrano, I., De Swaef, T., Lootens, P., and Roldan-ruiz, I.: ViCTool: An open-source tool for vegetation indices computation of aerial raster images using python GDAL, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8914, https://doi.org/10.5194/egusphere-egu22-8914, 2022.

EGU22-9101 | Presentations | ESSI3.3

openEO Platform: Enabling analysis of large-scale Earth Observation data repositories with federated computational infrastructure 

Benjamin Schumacher, Patrick Griffiths, Edzer Pebesma, Jeroen Dries, Alexander Jacob, Daniel Thiex, Matthias Mohr, and Christian Briese

The growing data stream from Earth Observation (EO) satellites has advanced scientific knowledge about the environmental status of planet Earth and has enabled detailed environmental monitoring services. The openEO API, developed in the Horizon 2020 project openEO (2017–2020, see https://openeo.org/), demonstrated that large-scale EO data processing needs can be expressed through a common set of analytic operators that are implemented in many GIS or image analysis software products. The openEO Platform service implements the API as an operational, federated service currently running on back-ends at EODC and VITO, with access to Sentinel Hub data, to meet the processing needs of a wide user community.

openEO Platform (https://openeo.cloud/) enables users to access a large collection of open EO data and perform scientific computations with intuitive client libraries that hide the underlying complexity. The platform is being built with a strong focus on user co-creation and input from various disciplines, incorporating a range of use cases and a free-of-charge Early Adopter programme that allows users to test the platform and communicate directly with its developers. The use cases include CARD4L-compliant ARD data creation with user-defined parameterisation, forest dynamics mapping including time series fitting and prediction, crop type mapping including EO feature engineering to support machine learning based crop mapping, and forest canopy mapping supporting regression-based fraction cover mapping.

The interaction with the platform includes multiple programming interfaces (R, Python, JavaScript) and a browser-based management console and model builder which allows a direct, interactive display and modification of processing workflows. The resulting processing graph is then forwarded via the openEO API to the federated back-ends.
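The kind of processing graph forwarded to the back-ends can be sketched as a plain JSON-style document; the process identifiers below (`load_collection`, `ndvi`, `save_result`) follow the openEO process specifications, while the collection id, extents and node names are illustrative.

```python
# Hand-built sketch of an openEO process graph, as a client library
# would serialise it before forwarding it via the openEO API.
process_graph = {
    "load": {
        "process_id": "load_collection",
        "arguments": {
            "id": "SENTINEL2_L2A",  # illustrative collection id
            "spatial_extent": {"west": 16.1, "east": 16.6,
                               "south": 47.9, "north": 48.6},
            "temporal_extent": ["2021-01-01", "2021-12-31"],
        },
    },
    "ndvi": {
        "process_id": "ndvi",
        "arguments": {"data": {"from_node": "load"}},
    },
    "save": {
        "process_id": "save_result",
        "arguments": {"data": {"from_node": "ndvi"}, "format": "GTiff"},
        "result": True,  # marks the node whose output is returned
    },
}
print([node["process_id"] for node in process_graph.values()])
# -> ['load_collection', 'ndvi', 'save_result']
```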

In the future, users will be able to process continental-scale EO data and create ready-to-use environmental monitoring services with analysis-ready data (ARD) and predefined processes. This presentation will provide an overview of the current capabilities and the evolution roadmap of openEO Platform, and will demonstrate the utility of the platform for processing large amounts of EO data into meaningful information products that support environmental monitoring, scientific research and political decision-making.

How to cite: Schumacher, B., Griffiths, P., Pebesma, E., Dries, J., Jacob, A., Thiex, D., Mohr, M., and Briese, C.: openEO Platform: Enabling analysis of large-scale Earth Observation data repositories with federated computational infrastructure, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9101, https://doi.org/10.5194/egusphere-egu22-9101, 2022.

EGU22-10032 | Presentations | ESSI3.3

Technical-semantic interoperability reference 

Piotr Zaborowski, Rob Atkinson, Nils Hempelmann, and Marie-Francoise Voidrot

The FAIR data principles form the core of the OGC mission and are reflected in the open geospatial standards and the open-data initiatives that use them. Although OGC is best known for technical interoperability, domain modelling and the semantic level play an essential role in the definition and exploitation of standards. On the one hand, we have a growing number of specialised profiles and implementations that selectively use components of the OGC modular specification model. On the other hand, various domain ontologies already exist, enabling a better understanding of the data. As there can be multiple semantic representations, common data models support traversal across ontologies. Defining a service in the technical-semantic space requires fixing some flexibility points, including optional and mandatory elements, additional constraints and rules, and content such as the normalised vocabularies to be used.

The proposed solution, the OGC Definition Server, is a multi-purpose application built around a triple store database engine, integrated with ingestion, validation, and entailment tools, and exposing customized endpoints. The models are available in human-readable formats and in machine-to-machine encodings. For manual processes, it enables understanding of the technical and semantic definitions and relationships between entities; programmatic solutions benefit from a precise referential system, validation, and entailment.

Currently, OGC Definition Server is hosting several types of definitions covering:

  • Register of OGC bodies, assets, and their modules
  • Ontological common semantic models (e.g., for Agriculture)
  • Dictionaries of subject domains (e.g., PipelineML Codelists)

In practice, this is a step forward in bridging conceptual and logical models. Concepts can be expressed as instances of various ontological classes and interpreted within multiple contexts, with the definition translated into entities, relationships, and properties. In the future, linking the data to the reference model and to external ontologies may be even more significant, as doing so can greatly improve the quality of the knowledge produced from the collected data. The ability to verify research outcomes and explainable AI are just two examples where a precise log of inferences and unambiguous semantic compatibility of the data will play a key role.
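To illustrate the kind of lookup such a definition server resolves, the sketch below implements naive triple-pattern matching in plain Python; the real Definition Server sits on a triple store with SPARQL endpoints, and the prefixed URIs here are invented for the example.

```python
# A tiny in-memory "triple store": each entry is (subject, predicate,
# object).  The identifiers are invented for this sketch.
triples = {
    ("ogc:Agriculture", "rdf:type", "skos:ConceptScheme"),
    ("ex:cropType", "skos:inScheme", "ogc:Agriculture"),
    ("ex:cropType", "skos:prefLabel", "crop type"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard,
    like an unbound variable in a SPARQL basic graph pattern."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

print(len(match(s="ex:cropType")))  # -> 2 (both statements about ex:cropType)
```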

How to cite: Zaborowski, P., Atkinson, R., Hempelmann, N., and Voidrot, M.-F.: Technical-semantic interoperability reference, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10032, https://doi.org/10.5194/egusphere-egu22-10032, 2022.

EGU22-10213 | Presentations | ESSI3.3

Developing semantic interoperability in ecosystem studies: semantic modelling and annotation for FAIR data production 

Christian Pichot, Nicolas Beudez, Cécile Callou, André Chanzy, Alyssa Clavreul, Philippe Clastre, Benjamin Jaillet, François Lafolie, Jean-François Le Galliard, Chloé Martin, Florent Massol, Damien Maurice, Nicolas Moitrier, Ghislaine Monet, Hélène Raynal, Antoine Schellenberger, and Rachid Yahiaoui

The study of ecosystem characteristics and functioning requires multidisciplinary approaches and mobilises multiple research teams. Data are collected or computed in large quantities but are most often poorly standardised and therefore heterogeneous. In this context, the development of semantic interoperability is a major challenge for the sharing and reuse of these data. This objective is pursued within the framework of the AnaEE (Analysis and Experimentation on Ecosystems) Research Infrastructure, dedicated to experimentation on ecosystems and biodiversity. A distributed Information System (IS) is being developed, based on the semantic interoperability of its components, using common vocabularies (the AnaeeThes thesaurus and an OBOE-based ontology extended for disciplinary needs) to model observations and their experimental context. The modelling covers the measured variables and the different components of the experimental context, from sensor and plot to network. It consists of the atomic decomposition of observations, identifying the observed entities, their characteristics and qualification, naming standards and measurement units. This modelling allows the semantic annotation of relational databases and flat files for the production of graph databases. A first pipeline automates the annotation process and the production of semantic data; without such automation, annotation can represent a huge conceptual and practical effort. A second pipeline is devoted to exploiting these semantic data through the generation of (i) standardized GeoDCAT and ISO metadata records and (ii) data files (NetCDF format) for selected perimeters (experimental sites, years, experimental factors, measured variables...). Applied to all the data generated by the experimental platforms, this practice will produce semantically interoperable data that meet linked open data standards.
The work carried out contributes to the development and use of semantic vocabularies within the ecology research community. The genericity of the tools makes them usable in different contexts of ontologies and databases.
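The atomic decomposition of an observation described above can be sketched as a simple data structure; the class and property names are simplified from the OBOE ontology, and the values are invented for this example.

```python
# OBOE-style atomic decomposition of one measurement: an observation of
# an entity carries measurements, each with a characteristic, a
# measurement standard (unit) and a value, plus an observation context.
# Names are simplified from OBOE; values are invented for the sketch.
observation = {
    "entity": "Air",                      # the observed entity
    "measurements": [{
        "characteristic": "Temperature",  # what is measured
        "standard": "DegreeCelsius",      # the measurement standard/unit
        "value": 21.3,
    }],
    # context: the experimental unit within which the observation sits
    "context": {"entity": "ExperimentalPlot", "label": "plot-A1"},
}
print(observation["measurements"][0]["characteristic"])  # -> Temperature
```

Decomposing records this way is what makes the automated annotation pipelines possible: each atomic element can be mapped to an ontology class and serialised into the graph database.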

How to cite: Pichot, C., Beudez, N., Callou, C., Chanzy, A., Clavreul, A., Clastre, P., Jaillet, B., Lafolie, F., Le Galliard, J.-F., Martin, C., Massol, F., Maurice, D., Moitrier, N., Monet, G., Raynal, H., Schellenberger, A., and Yahiaoui, R.: Developing semantic interoperability in ecosystem studies: semantic modelling and annotation for FAIR data production, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10213, https://doi.org/10.5194/egusphere-egu22-10213, 2022.

An open-source framework is presented to support geoscientific investigations of flow, conduction, and wave propagation. The Analytic Element Method (AEM) provides nearly exact solutions to complicated boundary and interface problems, typically with 6–8 significant digits. Examples are presented for seepage of water through soil and aquifers including fractured flow, groundwater/surface water interactions through stream beds, and ecological interactions of plant water uptake. Related applications include waves near coastal features and propagation of tsunamis through bathymetric shoals. This presentation overviews the concise AEM representation from Steward (2020), "Analytic Element Method: Complex Interactions of Boundaries and Interfaces", in which solutions discretize the domain into features, develop mathematical representations of their interactions, and assemble coupled systems of equations to satisfy boundary conditions. The companion site at Oxford University Press contains a wide range of open-source solutions to these problems and related applications across the geosciences.
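
The core AEM idea of superposing closed-form elements in a complex potential can be conveyed with a minimal sketch. The element choices and values below are illustrative, not a worked example from Steward (2020):

```python
import cmath

# Minimal AEM flavour (illustrative values): superpose closed-form elements
# in the complex potential
#   Omega(z) = -Q0*z + (Qw / (2*pi)) * log(z - zw)
# i.e., uniform flow of strength Q0 plus a well of discharge Qw at zw.
# Re(Omega) gives the discharge potential, Im(Omega) the stream function.

def omega(z, Q0=1.0, Qw=2.0, zw=1 + 1j):
    return -Q0 * z + Qw / (2 * cmath.pi) * cmath.log(z - zw)

z = 3 + 2j
phi = omega(z).real   # discharge potential at z
psi = omega(z).imag   # stream function at z
print(phi, psi)
```

Real AEM solutions add many such elements and solve a coupled system so that boundary and interface conditions are met to high precision, which is where the near-exact accuracy comes from.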

How to cite: Steward, D. R.: An open-source framework for nearly exact solutions to complex geoscience interactions (AEM), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10575, https://doi.org/10.5194/egusphere-egu22-10575, 2022.

Unidata has developed and deployed data infrastructure and data-proximate scientific workflows and software tools using cloud computing technologies for accessing, analyzing, and visualizing geoscience data. These resources are provided to educators and researchers through the Unidata Science Gateway (https://science-gateway.unidata.ucar.edu) and deployed on the U.S. National Science Foundation-funded Jetstream (https://jetstream-cloud.org) cloud facility. During the SARS-CoV-2/COVID-19 pandemic, the Unidata Science Gateway has been used by many universities to teach data-centric atmospheric science courses and to conduct several software training workshops that advance skills in data science.

The COVID-19 pandemic led to the closure of university campuses with little advance notice. Educators at institutions of higher learning had to urgently transition from in-person teaching to online classrooms. While such a sudden change was disruptive for education, it also presented an opportunity to experiment with instructional technologies that have been emerging for the last few years. Web-based computational notebooks, with their mixture of explanatory text, equations, diagrams and interactive code, are an effective tool for online learning. Their use is prevalent in many disciplines, including the geosciences. Multi-user computational notebook servers (e.g., Jupyter Notebooks) enable specialists to deploy pre-configured scientific computing environments for the benefit of students. The use of such tools and environments removes barriers for students who would otherwise have to download and install complex software tools that can be time-consuming to configure, simplifying workflows and reducing time to analysis and interpretation. It also provides a consistent computing environment for all students and democratizes access to resources. These servers can be provisioned with computational resources not found in a desktop computing setting and can leverage cloud computing environments and high-speed networks. They can be accessed from any web-browser-enabled device, such as a laptop or tablet.
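
A multi-user notebook deployment of this kind is typically driven by a short configuration file. The sketch below is a hypothetical minimal JupyterHub configuration (the image name and resource limits are invented; the spawner and settings shown come from the JupyterHub/DockerSpawner projects), not Unidata's actual gateway configuration:

```python
# jupyterhub_config.py -- minimal multi-user deployment sketch (hypothetical
# values; not the Unidata Science Gateway configuration).

c = get_config()  # noqa: JupyterHub injects get_config() at startup

# Spawn each user's server in its own container from a pre-built
# scientific-computing image, so students install nothing locally.
c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
c.DockerSpawner.image = "myuniversity/atmos-science-notebook:2022"  # hypothetical image

# Land users in JupyterLab rather than the classic notebook UI.
c.Spawner.default_url = "/lab"

# Per-user resource limits on the shared cloud host.
c.Spawner.mem_limit = "2G"
c.Spawner.cpu_limit = 1.0
```

With such a file in place, every student gets an identical, pre-provisioned environment reachable from any browser, which is precisely the consistency benefit described above.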

Since spring 2020, when the COVID-19 pandemic led to the closure of universities across the U.S., Unidata has assisted several earth science departments with computational notebook environments for their classes. We worked with educators to tailor these resources for their teaching objectives. We ensured the technology was correctly provisioned with appropriate computational resources and collaborated to have teaching material immediately available for students. There were many successful examples of online learning experiences.

In this paper, we describe the details of the Unidata Science Gateway resources and discuss how those resources enabled Unidata to support universities during the COVID-19 lockdown.

How to cite: Ramamurthy, M. and Chastang, J.: The use of the Unidata Science Gateway as a cyberinfrastructure resource to facilitate education and research during COVID-19, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10615, https://doi.org/10.5194/egusphere-egu22-10615, 2022.

EGU22-10719 | Presentations | ESSI3.3

Virtual Labs for Collaborative Environmental Data Science 

Maria Salama, Gordon Blair, Mike Brown, and Michael Hollaway

Research in environmental data science is typically transdisciplinary in nature, with scientists, practitioners, and stakeholders creating data-driven solutions to the environment’s grand challenges, often using large amounts of highly heterogeneous data along with complex analytical methods. The concept of virtual labs allows collaborating scientists to explore big data, develop and share new methods, and communicate their results to stakeholders, practitioners, and decision-makers across different scales (individual, local, regional, or national).

Within the Data Science of the Natural Environment (DSNE) project, a transdisciplinary team of environmental scientists, statisticians, computer scientists and social scientists is collaborating to develop statistical/data science algorithms for environmental grand challenges through the medium of a virtual labs platform named DataLabs. DataLabs, under continuous agile development by UKCEH, is a consistent and coherent cloud-based research environment that promotes open and collaborative science by providing the infrastructure and software tools to bring users with different areas of expertise (scientists, stakeholders, policy-makers, and the public) interested in environmental science into one virtual space to tackle environmental problems. DataLabs supports end-to-end analysis, from the assimilation and analysis of data through to the visualisation, interpretation, and discussion of the results.

DataLabs draws on existing technologies to provide a range of functionality and modern tools to support research collaboration, including: (i) parallel data cluster services, such as Dask and Spark; (ii) executable notebook technologies, such as Jupyter, Zeppelin and R; (iii) lightweight applications such as RShiny to allow rapid collaboration among diverse research teams; and (iv) containerisation of application deployment (e.g., using Docker) so that technologies developed can be more easily moved to other cloud platforms as required. Following the principles of service-oriented architectures, the design enables selecting the most appropriate technology for each component and exposing functionality to other systems as services over HTTP. Within each component, a modular layered architecture is used to ensure separation of concerns and separated presentation. DataLabs uses JASMIN as the host computing platform, giving researchers seamless access to HPC resources while taking advantage of cloud scalability. Data storage is available to all systems through shared block storage (an NFS cluster) and object storage (Quobyte S3).

Research into and development of virtual labs for environmental data science are taking place within the DSNE project. This requires studying the current experiences, barriers and opportunities associated with virtual labs, as well as the requirements for future developments and extensions. For this purpose, we have conducted an online user engagement survey targeting DSNE researchers and the wider user community, as well as the international research groups and organisations that contribute to virtual labs design. The survey results are feeding into the continuous development of DataLabs. For instance, some of the researchers’ requirements include the ability to submit their own containers to DataLabs and secure access to external data storage. Other users have indicated the importance of having libraries of data science and data visualisation methods, which DSNE researchers are currently populating for application to different environmental problems.

How to cite: Salama, M., Blair, G., Brown, M., and Hollaway, M.: Virtual Labs for Collaborative Environmental Data Science, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10719, https://doi.org/10.5194/egusphere-egu22-10719, 2022.

EGU22-10796 | Presentations | ESSI3.3

UAV Data Analysis in the Cloud - A Case Study 

Ulrich Kelka, Chris Peters, Owen Kaluza, Jens Klump, Steven Micklethwaite, and Nathan Reid

The mapping of fracture networks from aerial photographs, tracing of fault scarps in digital elevation models, and digitisation of boundaries from potential field data is fundamental to many geological applications (e.g. resource management, natural hazard assessment, geotechnical stability etc.). However, conventional approaches to digitising geological features are labour intensive and do not scale.

We describe how we designed an automated fracture detection workflow and implemented it in a cloud environment, using free and open-source software, as part of The Australian Scalable Drone Cloud (ASDC, https://asdc.io) national initiative. The ASDC aims to standardise and scale drone data, then analyse and translate it for users in academia, government, and industry.

In this use case, we applied automatic ridge/edge detection techniques to generate trace maps of discontinuities (e.g. fractures or lineaments). The approach allows for internal classification based on statistical description and/or geometry and enhances the understanding of the internal structure of such networks. Further, photogrammetry and image analysis at scale can be limited by the available computing resources, but this issue was overcome through implementation in the cloud. These simple methods serve as a basis for emerging techniques that utilise machine learning to fully automate discontinuity identification, and represent an important step in the cultural adoption of such tools in the Earth Science community.

To implement this case study, we deployed Open Drone Map (ODM) on a cloud infrastructure to produce orthophoto mosaics from aerial images taken by UAV. We ported a fracture detection and mapping algorithm from Matlab to Python for the image analysis. The image analysis workflow is orchestrated through a Jupyter Notebook on a Jupyter Hub. The resulting prototype workflow will be used to better scope the services needed to manage the ASDC platform, such as user management and data logistics.
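
A toy version of the ridge/edge-detection step conveys the idea behind trace mapping. This pure-Python Sobel gradient-magnitude sketch is illustrative only; it is not the ported CSIRO algorithm:

```python
import math

# Toy edge detection of the kind that underlies fracture-trace mapping:
# mark pixels whose Sobel gradient magnitude exceeds a threshold.
# (Illustrative sketch; the actual ASDC workflow uses a ported
# Matlab ridge-detection algorithm, not this code.)

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edge_map(img, threshold=1.0):
    """Return a binary map: 1 where the gradient magnitude exceeds threshold."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = 1 if math.hypot(gx, gy) > threshold else 0
    return out

# A synthetic "orthophoto": a dark vertical fracture in a bright field.
img = [[10, 10, 0, 10, 10] for _ in range(5)]
traces = edge_map(img, threshold=1.0)
```

Production versions of this step run over gigapixel mosaics, which is why the cloud implementation matters: the per-pixel work is trivially parallel but the data volumes are not.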

How to cite: Kelka, U., Peters, C., Kaluza, O., Klump, J., Micklethwaite, S., and Reid, N.: UAV Data Analysis in the Cloud - A Case Study, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10796, https://doi.org/10.5194/egusphere-egu22-10796, 2022.

EGU22-10850 | Presentations | ESSI3.3

A Natural Language Processing-based Metadata Recommendation Tool for Earth Science Data 

Armin Mehrabian, Irina Gerasimov, and Mohammad Khayat

As one of NASA's Science Mission Directorate data centers, the Goddard Earth Sciences Data and Information Services Center (GES-DISC) provides Earth science data, information, and services to the public. One of the objectives of our mission is to facilitate data discovery for users and systems that utilize our data. Metadata plays a very important role in data discovery. As a result, if a dataset is to be used efficiently, it needs to be enhanced with rich and comprehensive metadata. For example, most search engines rely on matching the search query with the indexed metadata in order to find relevant results. Here we present a tool that supports data custodians in the process of creating metadata by utilizing natural language processing (NLP).

 

Our approach involves combining several text corpora and training a semantic embedding. An embedding is a numerical representation of linguistic features that is aware of semantics and context. The text corpora we use to train our embedding model contain publication abstracts, our data collections’ metadata, and ontologies. Our recommendations are based on keywords selected from the Global Change Master Directory (GCMD) and a collection of ontologies including SWEET and ENVO. GCMD offers a comprehensive collection of Earth science vocabulary terms. This data lexicon enables data curators to easily search metadata and retrieve the data, services, and variables associated with each term. When a query is matched against the keywords in a GCMD branch, the probability of the query matching these keywords is calculated. A similarity score is then assigned to each branch of the GCMD, and the branches are ranked according to this similarity metric. In addition to being trained in an unsupervised fashion, our approach has the advantage of being able to produce keyword recommendations for inputs of different sizes, ranging from sub-words to sentences and longer texts.
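
The ranking step can be sketched as a cosine-similarity comparison between a query embedding and keyword embeddings. The tiny hand-made vectors below stand in for the trained semantic embedding and are illustrative only:

```python
import math

# Schematic keyword recommendation: rank GCMD-style keywords by cosine
# similarity between a query vector and keyword vectors. The 3-d hand-made
# vectors stand in for a trained, high-dimensional semantic embedding.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

keyword_vectors = {
    "EARTH SCIENCE > ATMOSPHERE > PRECIPITATION": [0.9, 0.1, 0.0],
    "EARTH SCIENCE > OCEANS > SEA SURFACE TEMPERATURE": [0.1, 0.9, 0.1],
    "EARTH SCIENCE > LAND SURFACE > SOILS": [0.0, 0.2, 0.9],
}

def recommend(query_vec, top_n=2):
    """Return the top_n keywords most similar to the query embedding."""
    ranked = sorted(keyword_vectors.items(),
                    key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [k for k, _ in ranked[:top_n]]

# A query embedding that falls close to the "precipitation" region of the space.
print(recommend([0.8, 0.2, 0.0]))
```

In the real tool the query vector comes from embedding the curator's free text, so sub-word, sentence and document-length inputs all map into the same space before ranking.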

How to cite: Mehrabian, A., Gerasimov, I., and Khayat, M.: A Natural Language Processing-based Metadata Recommendation Tool for Earth Science Data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10850, https://doi.org/10.5194/egusphere-egu22-10850, 2022.

EGU22-10855 | Presentations | ESSI3.3

Cross-institutional collaboration through the prism of FOSS and Cloud technologies 

Pavel Golodoniuc, Vincent Fazio, Samuel Bradley, YunLong Li, and Jens Klump

The AuScope Virtual Research Environment (AVRE) program’s Engage activity was devised as a vehicle to promote low-barrier collaboration projects with Australian universities and publicly funded research agencies, and to provide an avenue for exploring new applications and technologies that could become part of the broader AuScope AVRE portfolio. In its second year, we developed two projects with a new cohort of collaborative project proponents from two Australian research institutions. Both projects leveraged and extended previously developed open-source projects while tailoring them to clients’ specific needs.

The latest projects developed under the AuScope AVRE Engage program were the AuScope Geochemistry Network (AGN) Lab Finder application and the Magnetic Component Symmetry (MCS) Analysis application. The Lab Finder application fits within a broader ecosystem of AGN projects and is an online tool that provides an overview of participating laboratories, their equipment, techniques, and contact information, with a catalogue that sums up the possibilities of each analytical technique and a user-friendly search and browsing interface. The MCS Analysis application implements the CSIRO Orthogonal Magnetic Component (OMC) analysis method for the detection of variations in the magnetic field (i.e., anomalies) that are the result of subsurface magnetizations. Both applications were developed using free and open-source software (FOSS), leveraging and further expanding on prior work. The AGN Lab Finder is an adaptation of the Technique Finder originally developed by Intersect for Microscopy Australia, which was redesigned to accommodate geochemistry-specific equipment and describe its analytical capabilities. It provides an indexing mechanism and a search functionality allowing researchers to efficiently locate and identify laboratories with the equipment necessary for their research needs and that satisfies their analytical capability requirements. The MCS Analysis application is a derivative product based on the Geophysical Processing Toolkit (GPT) that implements a user-centred approach to visual data analytics and modelling. It significantly improves user experience by integrating with open data services, adding complex interactivity and data visualisation functionality, and improving overall exploratory data analysis capability.

The Engage approach to running collaborative projects has proved successful over the last two years and has produced low-maintenance tools that are made freely accessible to researchers. This approach of engaging a wider audience and improving the speed of science delivery has influenced other projects within the CSIRO Mineral Resources business unit to implement similar programs.

This case study will demonstrate the social aspects of our experience in cross-institutional collaboration, showcase our learnings during the development of pilot projects, and outline our vision for future work.

How to cite: Golodoniuc, P., Fazio, V., Bradley, S., Li, Y., and Klump, J.: Cross-institutional collaboration through the prism of FOSS and Cloud technologies, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10855, https://doi.org/10.5194/egusphere-egu22-10855, 2022.

EGU22-11012 | Presentations | ESSI3.3

The Known Knowns, the Known Unknowns and the Unknown Unknowns of Geophysics Data Processing in 2030 

Lesley Wyborn, Nigel Rees, Jens Klump, Ben Evans, Tim Rawling, and Kelsey Druken

The Australian 2030 Geophysics Collections Project seeks to make accessible online a selection of rawer, high-resolution versions of geophysics datasets that comply with the FAIR and CARE principles, and to ensure they are suitable for programmatic access in HPC environments by the next generation of scalable, data-intensive computation (including AI and ML) expected by 2030. The 2030 project is not about building systems for the infrastructures and stakeholder requirements of today; rather, it is about positioning geophysical data collections to be capable of taking advantage of next-generation technologies and computational infrastructures by 2030.

There are already many known knowns of 2030 computing: high-end computational power will be at exascale, and today’s emerging collaborative platforms will continue to evolve as a mix of HPC and cloud. Data volumes will be measured in zettabytes (10²¹ bytes), about 10 times more than today. It will be mandatory for data access to be fully machine-to-machine, as envisaged by the FAIR principles in 2016. Whereas we currently discuss Big Data Vs (volume, variety, value, velocity, veracity, etc.), by 2030 the focus will be on Big Data Cs (community, capacity, confidence, consistency, clarity, crumbs, etc.).

Today’s research is so often undertaken on pre-canned, analysis-ready datasets (ARD) that are tuned towards the highest common denominator as determined by the data owner. However, increased computational power co-located with fast-access storage systems will mean that geophysicists will be able to work on less processed data levels and then transparently develop their own derivative products, more tuned to the parameters of their particular use case. By 2030, as research teams analyse larger volumes of high-resolution data, they will be able to assess the quality of their algorithms quickly, and there will be multiple versions of open software in use as researchers fine-tune individual algorithms to suit their specific requirements. We will be capable of more precise solutions, and in the hazards space and other relevant areas, analytics will be done faster than real time.

The emerging known unknowns concern how we will preserve and make transparent any result arising from this diversity and flexibility with regard to the exact software used, the precise version of the data accessed, the platforms utilised, etc. When we obtain a scientific ‘product’, how will we vouch for its fidelity and ensure it can be consistently replicated to establish trust? How do we preserve who funded what, so that sponsors can see which investments have had the greatest impact and uptake?

To have any confidence in a data product, we will need transparency throughout the whole scientific process. We need to start working now on more automated systems that capture provenance through successive levels of processing, including how a product was produced and which dataset or dataset extract was used. But how do we do this in a scalable, machine-readable way?
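
One minimal, machine-readable illustration of capturing provenance through successive processing levels is a hash-linked chain of records, where each record embeds the hash of its parent. This is a sketch of the general idea, not a system proposed by the project; all names are hypothetical:

```python
import hashlib
import json

# Sketch: chain provenance through processing levels by embedding the hash
# of the parent record in each child, so a product's lineage can be verified
# link by link. (Illustrative; dataset/software names are hypothetical.)

def provenance_record(dataset_id, software, version, parent_hash=None):
    """Return a provenance record and its deterministic SHA-256 digest."""
    record = {
        "dataset": dataset_id,
        "software": software,
        "version": version,
        "parent": parent_hash,
    }
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record, digest

raw, raw_hash = provenance_record("survey-001/L0", "acquisition-suite", "4.2")
filtered, filt_hash = provenance_record("survey-001/L1", "bandpass-filter", "1.0.3",
                                        parent_hash=raw_hash)
product, _ = provenance_record("survey-001/L2", "inversion-code", "2.1",
                               parent_hash=filt_hash)

# Each level points back to the exact record (and hence software + data
# version) that produced its input.
assert product["parent"] == filt_hash and filtered["parent"] == raw_hash
```

Because the digests are deterministic, any tampering with an upstream record changes its hash and breaks every downstream link, which is the property that lets sponsors and reviewers trust a replicated lineage.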

And then there will be the unknown unknowns of 2030 computing. Time will progressively expose these to us in the next decade as the scale and speed at which collaborative research is undertaken increases.

 

How to cite: Wyborn, L., Rees, N., Klump, J., Evans, B., Rawling, T., and Druken, K.: The Known Knowns, the Known Unknowns and the Unknown Unknowns of Geophysics Data Processing in 2030, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11012, https://doi.org/10.5194/egusphere-egu22-11012, 2022.

EGU22-11270 | Presentations | ESSI3.3

Freva, a software framework for the Earth System community. Overview and new features. 

Etor E. Lucio-Eceiza, Christopher Kadow, Martin Bergemann, Mahesh Ramadoss, Brian Lewis, Andrej Fast, Jens Grieger, Andy Richling, Ingo Kirchner, Uwe Ulbrich, Hannes Thiemann, and Thomas Ludwig

The complexity of the climate system calls for a combined approach across different knowledge areas. For that, increasingly large projects need a coordinated effort that fosters active collaboration between members. On the other hand, although the continuous improvement of numerical models and larger observational data availability provide researchers with a growing amount of data to analyze, the need for greater resources to host, access, and evaluate them efficiently through High Performance Computing (HPC) infrastructures is growing more than ever. Finally, the thriving emphasis on FAIR data principles [1] and the easy reproducibility of evaluation workflows also requires a framework that facilitates these tasks. Freva (Free Evaluation System Framework [2, 3]) is an efficient solution for handling customizable evaluation systems of large research projects, institutes or universities in the Earth system community [4-6] over an HPC environment and in a centralized manner.

 

Freva is a scientific software infrastructure for standardized data and analysis tools (plugins) that provides all its features in both a shell and a web environment. Written in Python, it is equipped with a standardized model database, an application programming interface (API) and a history of evaluations, among other features:

  • A metadata system implemented in SOLR, with its own search tool, allows scientists and their plugins to retrieve the required information from a centralized database. The databrowser interface satisfies the international standards provided by the Earth System Grid Federation (ESGF, e.g. [7]).
  • An API allows scientific developers to connect their plugins with the evaluation system independently of the programming language. Connected plugins can read from the database and integrate their results back into it, allowing plugins to be chained as well. This ecosystem increases the number of scientists involved in the studies, boosting the interchange of results and ideas. It also fosters an active collaboration between plugin developers.
  • The history and configuration sub-system stores every analysis performed with Freva in a MySQL database. Analysis configurations and results can be searched and shared among the scientists, offering transparency and reproducibility, and saving CPU hours, I/O, disk space and time.
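
The databrowser concept above, faceted search over a centralized metadata store, can be shown with a small sketch. This is a pure-Python illustration of the idea, not Freva's actual API, and the catalogue entries are invented:

```python
# Schematic faceted metadata search of the kind a Solr-backed databrowser
# offers (pure-Python illustration; not Freva's actual API).

CATALOGUE = [
    {"project": "cmip6", "model": "mpi-esm1-2", "variable": "tas", "path": "/data/1.nc"},
    {"project": "cmip6", "model": "mpi-esm1-2", "variable": "pr",  "path": "/data/2.nc"},
    {"project": "obs",   "model": "era5",       "variable": "tas", "path": "/data/3.nc"},
]

def databrowser(**facets):
    """Return the file paths of all catalogue entries matching every facet."""
    return [entry["path"] for entry in CATALOGUE
            if all(entry.get(key) == value for key, value in facets.items())]

print(databrowser(project="cmip6", variable="tas"))   # ['/data/1.nc']
```

In the real system the catalogue is indexed in SOLR so that the same facet queries stay fast over millions of files, and plugins issue them programmatically to locate their input data.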

Freva efficiently frames the interaction between different technologies thus improving the Earth system modeling science.

 

This framework has undergone major refactoring and restructuring of the core that will also be discussed. Among others:

  • Major core Python update (2.7 to 3.9).
  • Easier deployment and containerization of the framework via Docker.
  • More secure system configuration via Vault integration.
  • Direct Freva function calls via a Python client (e.g., from Jupyter notebooks).
  • Improvements in the dataset incorporation.

 

References:

[1] https://www.go-fair.org/fair-principles/

[2] Kadow, C. et al., 2021. Introduction to Freva – A Free Evaluation System Framework for Earth System Modeling. JORS. http://doi.org/10.5334/jors.253

[3] gitlab.dkrz.de/freva

[4] freva.met.fu-berlin.de

[5] https://www.xces.dkrz.de/

[6] www-regiklim.dkrz.de

[7] https://esgf-data.dkrz.de/projects/esgf-dkrz/

How to cite: Lucio-Eceiza, E. E., Kadow, C., Bergemann, M., Ramadoss, M., Lewis, B., Fast, A., Grieger, J., Richling, A., Kirchner, I., Ulbrich, U., Thiemann, H., and Ludwig, T.: Freva, a software framework for the Earth System community. Overview and new features., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11270, https://doi.org/10.5194/egusphere-egu22-11270, 2022.

EGU22-12617 | Presentations | ESSI3.3

Digital Twin of the Ocean - An Introduction to the ILIAD project 

Bente Lilja Bye, Georgios Sylaios, Arne-Jørgen Berre, Simon Van Dam, and Vivian Kiousi

The ILIAD Digital Twin of the Ocean, an H2020-funded project, builds on the assets resulting from two decades of investments in policies and infrastructures for the blue economy and aims at establishing an interoperable, data-intensive, and cost-effective Digital Twin of the Ocean. It capitalizes on the explosion of new data provided by many different Earth observation sources and on advanced computing infrastructures (cloud computing, HPC, Internet of Things, Big Data, social networking, and more) in an inclusive, virtual/augmented, and engaging fashion to address all Earth data challenges. It will contribute towards a sustainable ocean economy as defined by the Centre for the Fourth Industrial Revolution and the Ocean, a hub for global, multistakeholder co-operation.
The ILIAD Digital Twin of the Ocean will fuse a large volume of diverse data in a semantically rich and data-agnostic approach to enable simultaneous communication with real-world systems and models. Ontologies and a standard styled-layer descriptor will facilitate semantic information and intuitive discovery of the underlying information and knowledge to provide a seamless experience. The combination of geovisualisation, immersive visualization and virtual or augmented reality allows users to explore, synthesize, present, and analyze the underlying geospatial data in an interactive manner. To realize its potential, the ILIAD Digital Twin of the Ocean will follow the System of Systems approach, integrating the plethora of existing EU Earth Observing and Modelling Digital Infrastructures and Facilities. To promote additional applications through the ILIAD Digital Twin of the Ocean, the partners will create the ILIAD Marketplace, including a market for geoscience-related applications and services. Like an app store, the ILIAD Marketplace will allow providers to distribute apps, plug-ins, interfaces, raw data, citizen science data, synthesized information, and value-adding services derived from the ILIAD Digital Twin of the Ocean. It will also be an efficient way for scientists to discover and find relevant applications and services.

How to cite: Bye, B. L., Sylaios, G., Berre, A.-J., Van Dam, S., and Kiousi, V.: Digital Twin of the Ocean - An Introduction to the ILIAD project, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12617, https://doi.org/10.5194/egusphere-egu22-12617, 2022.

EGU22-2024 | Presentations | ITS3.1/SSS1.2 | Highlight

Understanding natural hazards in a changing landscape: A citizen science approach in Kigezi highlands, southwestern Uganda 

Violet Kanyiginya, Ronald Twongyirwe, Grace Kagoro, David Mubiru, Matthieu Kervyn, and Olivier Dewitte

The Kigezi highlands, southwestern Uganda, is a mountainous tropical region with a high population density, intense rainfall, alternating wet and dry seasons and high weathering rates. As a result, the region is regularly affected by multiple natural hazards such as landslides, floods, heavy storms, and earthquakes. In addition, deforestation and land use changes are assumed to have an influence on the patterns of natural hazards and their impacts in the region. The landscape characteristics and dynamics controlling the occurrence and the spatio-temporal distribution of natural hazards in the region remain poorly understood. In this study, citizen science has been employed to document and understand the spatial and temporal occurrence of natural hazards that affect the Kigezi highlands in relation to the multi-decadal landscape change of the region. We present a methodological research framework involving three categories of participatory citizen scientists. First, a network of 15 geo-observers (i.e., citizens of local communities distributed across representative landscapes of the study area) was established in December 2019. The geo-observers were trained to use smartphones to collect information (processes and impacts) on eight different natural hazards occurring across their parishes. In a second phase, eight river watchers were selected at the watershed level to monitor stream flow characteristics. These watchers record stream water levels once daily and make flood observations. In both categories, validation and quality checks are done on the collected data before further analysis. Combined with high-resolution rainfall monitoring from rain gauges installed in the watersheds, the data are expected to characterize catchment response to flash floods.
Lastly, to reconstruct the historical landscape change and natural hazard occurrences in the region, 96 elderly citizens (>70 years of age) were engaged through interviews and focus group discussions to give an account of the evolution of their landscape over the past 60 years. We constructed a historical timeline for the region to complement the participatory mapping and in-depth interviews with the elderly citizens. During the first 24 months of the project, 240 natural hazard events with accurate timing information were reported by the geo-observers. Conversion from natural tree species to exotic species, increased cultivation of hillslopes, road construction, and the abandonment of terraces and fallowing practices have accelerated natural hazards, especially flash floods and landslides, in the region. Complemented by the region’s historical photos of 1954 and satellite images, major landscape dynamics have been detected. The ongoing data collection involving detailed ground-based observations with citizens shows a promising trend in the generation of new knowledge about natural hazards in the region.

How to cite: Kanyiginya, V., Twongyirwe, R., Kagoro, G., Mubiru, D., Kervyn, M., and Dewitte, O.: Understanding natural hazards in a changing landscape: A citizen science approach in Kigezi highlands, southwestern Uganda, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2024, https://doi.org/10.5194/egusphere-egu22-2024, 2022.

EGU22-2929 | Presentations | ITS3.1/SSS1.2

Possible Contributions of Citizen Science in the Development of the Next Generation of City Climate Services 

Peter Dietrich, Uta Ködel, Sophia Schütze, Felix Schmidt, Fabian Schütze, Aletta Bonn, Thora Herrmann, and Claudia Schütze

Human life in cities is already affected by climate change. The effects will become even more pronounced in the coming years and decades. A next generation of city climate services is necessary for adapting cities’ infrastructures and the management of their services to climate change. These services are based on advanced weather forecast models and access to diverse data. It is essential to keep in mind that each citizen is a unique individual with their own peculiarities, preferences, and behaviors. The basis of our approach is individual-specific exposure, which recognises that people perceive the same conditions differently in terms of their well-being. Individual-specific exposure can be defined as the sum of all environmental conditions that affect a person during a given period of time, in a specific location, and in a specific context. Measurable abiotic parameters such as temperature, humidity, wind speed, pollution and noise are used to characterize the environmental conditions. Additional information regarding green spaces, trees, parks, kinds of streets and buildings, as well as available infrastructures, is included in the context. The recording and forecasting of environmental parameters while taking the context into account, as well as the presentation of this information in easy-to-understand and easy-to-use maps, are critical for influencing human behavior and implementing appropriate climate change adaptation measures.
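
The notion of individual-specific exposure can be made concrete with a toy index: a person-specific weighted mean of normalised abiotic parameters. The parameters, ranges and weights below are hypothetical and purely illustrative:

```python
# Toy individual exposure index: a person-specific weighted mean of
# normalised abiotic parameters. Parameters, plausible ranges and the
# per-person weights are hypothetical.

RANGES = {"temperature": (-10, 40), "noise": (30, 100), "pm10": (0, 150)}

def normalise(param, value):
    """Scale a measurement to [0, 1] within its assumed range."""
    lo, hi = RANGES[param]
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def exposure_index(measurements, weights):
    """Weighted mean of normalised conditions; higher means more burdensome."""
    total_w = sum(weights.values())
    return sum(weights[p] * normalise(p, v) for p, v in measurements.items()) / total_w

# The same physical conditions, weighted differently for two individuals.
conditions = {"temperature": 32, "noise": 75, "pm10": 60}
heat_sensitive = exposure_index(conditions, {"temperature": 3, "noise": 1, "pm10": 1})
noise_sensitive = exposure_index(conditions, {"temperature": 1, "noise": 3, "pm10": 1})
```

The point of the weights is exactly the one made above: identical measured conditions yield different exposure values for different people, which is what a personalised city climate service would map and forecast.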

We will adopt this approach within the framework of the recently started EU-funded CityCLIM project. We aim to develop and implement approaches that explore the potential of citizen science for current and historical data collection, data quality assessment and the evaluation of data products. In addition, our approach will provide strategies for individual climate data use and for the derivation and evaluation of climate change adaptation actions in cities.

As a first step, we need to define and characterize the different potential stakeholder groups involved in citizen science data collection. Citizen science offers approaches that consider citizens both as organized target groups (e.g., engaged companies, schools) and as individual persons (e.g., hobby scientists). An important point to be investigated is how to motivate citizen science stakeholder groups to sustainably collect data and make it available to science, and how to reward them appropriately. For that purpose, strategic tools such as value proposition canvas analysis will be applied to tailor the science-to-business and science-to-customer communications and offers to individual needs.

How to cite: Dietrich, P., Ködel, U., Schütze, S., Schmidt, F., Schütze, F., Bonn, A., Herrmann, T., and Schütze, C.: Possible Contributions of Citizen Science in the Development of the Next Generation of City Climate Services, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2929, https://doi.org/10.5194/egusphere-egu22-2929, 2022.

EGU22-4168 | Presentations | ITS3.1/SSS1.2

Extending Rapid Image Classification with the Picture Pile Platform for Citizen Science 

Tobias Sturn, Linda See, Steffen Fritz, Santosh Karanam, and Ian McCallum

Picture Pile is a flexible web-based and mobile application for ingesting imagery from satellites, orthophotos, unmanned aerial vehicles and/or geotagged photographs for rapid classification by volunteers. Since 2014, 16 different crowdsourcing campaigns have been run with Picture Pile, involving more than 4000 volunteers who have classified around 11.5 million images. Picture Pile is based on a simple mechanic in which users view an image and then answer a question, e.g., “Do you see oil palm?”, with a simple yes, no or maybe answer given by swiping the image to the right, left or downwards, respectively. More recently, Picture Pile has been modified to classify data into categories (e.g., crop types) as well as continuous variables (e.g., degree of wealth) so that additional types of data can be collected.
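The swipe mechanic amounts to a small lookup from gesture to answer; a minimal Python sketch (the direction and label names are illustrative, not Picture Pile's actual API):

```python
# Hypothetical mapping of Picture Pile's swipe gestures to answers,
# following the mechanic described in the abstract:
# right = yes, left = no, down = maybe.
SWIPE_TO_ANSWER = {"right": "yes", "left": "no", "down": "maybe"}

def record_classification(image_id, swipe_direction):
    """Turn a volunteer's swipe into a stored (image, answer) pair."""
    return {"image": image_id, "answer": SWIPE_TO_ANSWER[swipe_direction]}
```

Keeping the gesture-to-label mapping in one table is what makes the mechanic easy to repurpose for new campaigns and question types.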

The Picture Pile campaigns have covered a range of domains, from classification of deforestation to building damage to different types of land cover, with crop type identification as the latest ongoing campaign through the Earth Challenge network. Hence, Picture Pile can be used for many different types of applications that need image classification, e.g., reference data for training remote sensing algorithms, validation of remotely sensed products or training data for computer vision algorithms. Picture Pile also has potential for monitoring some of the indicators of the United Nations Sustainable Development Goals (SDGs). The Picture Pile Platform is the next generation of the Picture Pile application, which will allow any user to create their own ‘piles’ of imagery and run their own campaigns using the system. In addition to providing an overview of Picture Pile, including some examples of relevance to SDG monitoring, this presentation will cover the current status of the Picture Pile Platform along with the data sharing model, the machine learning component and the vision for how the platform will function operationally to aid environmental monitoring.

How to cite: Sturn, T., See, L., Fritz, S., Karanam, S., and McCallum, I.: Extending Rapid Image Classification with the Picture Pile Platform for Citizen Science, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4168, https://doi.org/10.5194/egusphere-egu22-4168, 2022.

EGU22-5094 | Presentations | ITS3.1/SSS1.2

Life in undies – Preliminary results of a citizen science data collection targeting soil health assessment in Hungary

Mátyás Árvai, Péter László, Tünde Takáts, Zsófia Adrienn Kovács, Kata Takács, János Mészaros, and László Pásztor

Last year, the Institute for Soil Sciences, Centre for Agricultural Research launched Hungary's first citizen science project with the aim of obtaining information on the biological activity of soils using a simple estimation procedure. With the help of social media, responses to the call for participation were received from nearly 2000 locations.

In the Hungarian version of the international Soil Your Undies programme, standardized cotton underwear was posted to the participants with a step-by-step tutorial. Participants buried the underwear for about 60 days, from mid-May until July 2021, at a depth of about 20-25 cm. After excavation, the participants took a digital image of the underwear and recorded the geographical coordinates, which were uploaded to a Google Forms interface together with basic information on the location and the user (type of cultivation, demographic data, etc.).

By analysing the volunteers' digital photos of the excavated undies, we obtained information on the degree to which the cotton material had decomposed in different areas and under different types of cultivation. Around 40% of the participants buried the underwear in gardens, 21% in grassland, 15% in orchards, 12% in arable land, 5% in vineyards and 4% in forest (for 3%, no land-use data was provided).

The images were first processed using the Fococlipping and Photoroom software for background removal; the percentage of cotton material remaining was then estimated from the pixels using the R 'raster' package.
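As an illustration of that pixel-based estimate, here is a minimal sketch in Python/NumPy rather than the R 'raster' package used by the authors; the alpha threshold and toy image are assumptions for the example:

```python
import numpy as np

def garment_area(rgba, alpha_threshold=10):
    # After background removal, non-garment pixels are transparent,
    # so the garment area is the count of sufficiently opaque pixels.
    return int((rgba[..., 3] > alpha_threshold).sum())

def remaining_fraction(before_rgba, after_rgba):
    # Fraction of the original cotton area still present after burial.
    return garment_area(after_rgba) / garment_area(before_rgba)

# Toy example: a fully opaque 10x10 garment, half decomposed after burial.
before = np.zeros((10, 10, 4), dtype=np.uint8)
before[..., 3] = 255
after = before.copy()
after[5:, :, 3] = 0  # bottom half gone
print(f"{remaining_fraction(before, after):.0%} remaining")
```

The same idea applies to the real photos once background removal has left non-garment pixels transparent.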

The biological activity data collected countrywide from nearly 1200 sites were statistically evaluated by spatially aggregating them both for physiographical and administrative units. The results have been published on various platforms (Facebook, Instagram, a dedicated website, etc.), and feedback is also given directly to the volunteers.

Based on this experience, the first citizen science programme proved successful.

Acknowledgment: Our research was supported by the Hungarian National Research, Development and Innovation Office (NKFIH; K-131820)

Keywords: citizen science; soil life; soil health; biological activity; soil properties

How to cite: Árvai, M., László, P., Takáts, T., Kovács, Z. A., Takács, K., Mészaros, J., and Pásztor, L.: Life in undies – Preliminary results of a citizen science data collection targeting soil health assessment in Hungary, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5094, https://doi.org/10.5194/egusphere-egu22-5094, 2022.

EGU22-5147 | Presentations | ITS3.1/SSS1.2

Distributed databases for citizen science 

Julien Malard-Adam, Joel Harms, and Wietske Medema

Citizen science is often heavily dependent on software tools that allow members of the general population to collect, view and submit environmental data to a common database. While several such software platforms exist, they often require expert knowledge to set up and maintain, and server and data hosting costs can become substantial in the long term, especially if a project succeeds in attracting many users and data submissions. In the context of time-limited project funding, these limitations can pose serious obstacles to the long-term sustainability of citizen science projects as well as to their ownership by the community.

On the other hand, distributed database systems (such as Qri and Constellation) dispense with the need for a centralised server and instead rely on the devices (smartphones or computers) of the users themselves to store and transmit community-generated data. This approach leads to the counterintuitive result that distributed systems, unlike centralised ones, become more robust and offer better availability and response times as the user pool grows. In addition, since data are stored on users' own devices, distributed systems offer interesting potential for strengthening communities' ownership over their own environmental data (data sovereignty). This presentation will discuss the potential of distributed database systems to address the current technological limitations of centralised systems for open data and citizen science-led data collection efforts, and will give examples of use cases with currently available distributed database software platforms.
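Qri and Constellation differ in their details, but content addressing, where a record's identifier is a hash of its own content, is a common building block of such peer-to-peer stores; a minimal Python sketch (illustrative only, not either platform's actual API):

```python
import hashlib
import json

def content_id(record):
    """Derive a stable identifier from the record's content, so any peer
    holding a copy can verify it against the ID without trusting a server."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# A water-quality observation as it might be shared between peers.
obs = {"site": "river-3", "ph": 7.4, "time": "2022-05-23T10:00Z"}
cid = content_id(obs)
print(cid[:16])  # short prefix of the record's content address
```

Because the ID is derived from the data itself, identical records converge to the same address on every device, which is what lets a user pool replicate data without central coordination.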

How to cite: Malard-Adam, J., Harms, J., and Medema, W.: Distributed databases for citizen science, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5147, https://doi.org/10.5194/egusphere-egu22-5147, 2022.

EGU22-5571 | Presentations | ITS3.1/SSS1.2

RESECAN: citizen-driven seismology on an active volcano (Cumbre Vieja, La Palma Island, Canaries) 

Rubén García-Hernández, José Barrancos, Luca D'Auria, Vidal Domínguez, Arturo Montalvo, and Nemesio Pérez

During the last decades, countless seismic sensors have been deployed across the planet by different countries and institutions. In recent years, it has become possible to manufacture low-cost MEMS accelerometers thanks to nanotechnology and large-scale production. These devices can be easily configured and accurately synchronized via GPS. Customizable single-board platforms such as Arduino or Raspberry Pi can be used to develop low-cost seismic stations capable of local data storage and real-time data transfer. Such stations provide sufficient signal quality to complement conventional seismic networks.

In recent years Instituto Volcanológico de Canarias (INVOLCAN) has developed a proprietary low-cost seismic station to implement the Canary Islands School Seismic Network (Red Sísmica Escolar Canaria - RESECAN) with multiple objectives:

  • supporting the teaching of geosciences;
  • promoting scientific vocations;
  • strengthening the resilience of local communities by improving awareness of volcanism and the associated hazards;
  • densifying the existing seismic networks.

On 19 September 2021, a volcanic eruption started at the Cumbre Vieja volcano on La Palma. The eruption was preceded and accompanied by thousands of earthquakes, many of them felt, with intensities up to V MCS. Exploiting the attention drawn by the eruption, INVOLCAN started deploying low-cost seismic stations in educational centres on La Palma. In this preliminary phase, we selected five educational centres on the island.

The project's objective is to create and distribute low-cost stations in various educational institutions on La Palma, and later across the whole Canary Islands archipelago, supplementing them with educational material on seismology and volcanology. Each school will be able to access the data from its own station, as well as those collected by other centres, and to locate some of the recorded earthquakes. The data recorded by RESECAN will also be integrated into the broadband seismic network operated by INVOLCAN (Red Sísmica Canaria, C7). RESECAN will thus be an instrument of scientific utility, contributing effectively to volcano monitoring in the Canary Islands and reinforcing resilience with respect to future volcanic emergencies.

How to cite: García-Hernández, R., Barrancos, J., D'Auria, L., Domínguez, V., Montalvo, A., and Pérez, N.: RESECAN: citizen-driven seismology on an active volcano (Cumbre Vieja, La Palma Island, Canaries), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5571, https://doi.org/10.5194/egusphere-egu22-5571, 2022.

EGU22-6970 | Presentations | ITS3.1/SSS1.2

Analysis of individual learning outcomes of students and teachers in the citizen science project TeaTime4Schools 

Anna Wawra, Martin Scheuch, Bernhard Stürmer, and Taru Sanden

Only a few of the increasing number of citizen science projects set out to determine the project's impact on the diverse learning outcomes of citizen scientists. However, beyond the mere completion of project activities and data collection, voluntary work should be rewarded with measurable benefits in the form of individual learning outcomes (ILOs) (Phillips et al. 2014).

Within the citizen science project „TeaTime4Schools“, Austrian students aged 13 to 18 years collected data as a group activity in a teacher-guided school context; tea bags were buried in soil to investigate litter decomposition. In an online questionnaire, a set of selected ILO scales (Phillips et al. 2014, Kelemen-Finan et al. 2018, Wilde et al. 2009) was applied to assess the ILOs of students who participated in TeaTime4Schools. Several indicators (scales for project-related response, interest in science, interest in soil, environmental activism, and self-efficacy) were tailored from these evaluation frameworks to measure four main learning outcomes: interest, motivation, behavior and self-efficacy. In total, 106 valid student replies were analyzed. In addition, 21 teachers who participated in TeaTime4Schools answered a separate online questionnaire that asked directly about the quality and appeal of the methods used in the project, based on scales for learning tasks suggested by the University College for Agricultural and Environmental Education (2015) and modified for the purpose of this study. The findings of our research will be presented.

How to cite: Wawra, A., Scheuch, M., Stürmer, B., and Sanden, T.: Analysis of individual learning outcomes of students and teachers in the citizen science project TeaTime4Schools, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6970, https://doi.org/10.5194/egusphere-egu22-6970, 2022.

EGU22-7164 | Presentations | ITS3.1/SSS1.2

Seismic and air monitoring observatory for greater Beirut : a citizen observatory of the "urban health" of Beirut 

Cecile Cornou, Laurent Drapeau, Youssef El Bakouny, Samer Lahoud, Alain Polikovitch, Chadi Abdallah, Charbel Abou Chakra, Charbel Afif, Ahmad Al Bitar, Stephane Cartier, Pascal Fanice, Johnny Fenianos, Bertrand Guillier, Carla Khater, and Gabriel Khoury and the SMOAG Team

Already sensitive because of its geology (seismic and tsunami risk) and its position at the interface between arid and temperate ecosystems, the Mediterranean Basin is being transformed by climate change and major urban pressure on resources and spaces. Lebanon concentrates on a small territory the environmental, climatic, health, social and political crises of the Middle East: shortages and degradation of surface and groundwater quality, air pollution, landscape fragmentation, destruction of ecosystems, erosion of biodiversity, telluric risks, and very few mechanisms of information, prevention and protection against these vulnerabilities. Further, Lebanon sorely lacks environmental data at temporal and spatial scales sufficient to cover the range of key phenomena and to allow the integration of environmental issues into the country's development. This absence was sadly illustrated by the August 4th, 2020 explosion at the port of Beirut, which hindered the effective management of induced threats to protect the inhabitants. In this degraded context, combined with a systemic crisis situation in Lebanon, frugal innovation is more than an option; it is a necessity. Initiated in 2021 within the framework of the O-LIFE Lebanese-French research consortium (www.o-life.org), the « Seismic and air monitoring observatory for greater Beirut » (SMOAG) project aims at setting up a citizen observatory of the urban health of Beirut by deploying innovative, connected, low-cost, energy-efficient and robust environmental and seismological instruments. Through web services and mobile applications co-constructed with various stakeholders (citizens, NGOs, decision makers and scientists), the SMOAG citizen observatory will contribute to the information and mobilization of Lebanese citizens and managers by sharing the monitoring of key indicators associated with air quality, heat islands and building stability, essential issues for a sustainable Beirut.

The first phase of the project was dedicated to the development of a low-cost environmental sensor enabling pollution and urban weather measurements (particulate matter, SO2, CO, O3, NO2, solar radiation, wind speed, temperature, humidity, rainfall) and of the complete software infrastructure, from data acquisition to synoptic indicators accessible via web and mobile applications, following the Sensor Web Enablement and Sensor Observation Service standards of the OGC and the FAIR principles (Findable, Accessible, Interoperable, Reusable). A website, Android/iOS applications for the restitution of data and indicators, and a dashboard allowing real-time access to the data have been developed. Environmental stations and low-cost seismological stations (Raspberry Shake) have already been deployed in Beirut, most of them hosted by Lebanese citizens. These instrumental and open data access efforts were complemented by participatory workshops with various stakeholders to improve the ergonomics of the web and application interfaces and to define a roadmap for the installation of future stations, consistent with the most vulnerable populations identified by NGOs and with current knowledge of air pollution and heat islands in Beirut.

How to cite: Cornou, C., Drapeau, L., El Bakouny, Y., Lahoud, S., Polikovitch, A., Abdallah, C., Abou Chakra, C., Afif, C., Al Bitar, A., Cartier, S., Fanice, P., Fenianos, J., Guillier, B., Khater, C., and Khoury, G. and the SMOAG Team: Seismic and air monitoring observatory for greater Beirut : a citizen observatory of the "urban health" of Beirut, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7164, https://doi.org/10.5194/egusphere-egu22-7164, 2022.

EGU22-7323 | Presentations | ITS3.1/SSS1.2

Citizen science for better water quality management in the Brantas catchment, Indonesia? Preliminary results 

Reza Pramana, Schuyler Houser, Daru Rini, and Maurits Ertsen

Water quality in the rivers and tributaries of the Brantas catchment (about 12,000 km2) is deteriorating for various reasons, including rapid economic development, insufficient domestic water treatment and waste management, and industrial pollution. Various water quality parameters are measured at least monthly by the agencies involved in water resource development and management, and measurements consistently demonstrate exceedance of local water quality standards. Recent claims presented by the local Environmental Protection Agency indicate that water quality is affected much more by domestic sources than by other sources. To examine this, we proposed a citizen science campaign involving people from seven communities living close to the river, a network organisation that works on water quality monitoring, three government agencies, and students from a local university. Beginning in 2022, we kicked off the campaign with weekly test-strip measurements of nitrate, nitrite, and phosphate at twelve locations from upstream to downstream of the catchment. As part of the effort to provide education on water stewardship and empower citizens to participate in water quality management, preliminary results, covering the test strips, strategies, and challenges, will be presented.

How to cite: Pramana, R., Houser, S., Rini, D., and Ertsen, M.: Citizen science for better water quality management in the Brantas catchment, Indonesia? Preliminary results, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7323, https://doi.org/10.5194/egusphere-egu22-7323, 2022.

EGU22-7916 | Presentations | ITS3.1/SSS1.2

Citizen science - an invaluable tool for obtaining high-resolution spatial and temporal meteorological data 

Jadranka Sepic, Jure Vranic, Ivica Aviani, Drago Milanovic, and Miro Burazer

Quality-checked institutional meteorological data are often not measured at locations of particular interest for observing specific small-scale and meso-scale atmospheric processes. Similarly, institutional data can be hard to obtain due to data policy restrictions. On the other hand, many people are highly interested in meteorology and frequently deploy meteorological instruments where they live. Such citizen data are often shared through public data repositories and websites with sophisticated visualization routines. As a result, networks of citizen meteorological stations are, in many areas, denser and more easily accessible than institutional meteorological networks.

Several examples of publicly available citizen meteorological networks, including school networks, are explored, and their use in published high-quality scientific studies is discussed. It is shown that for data-based analysis of specific atmospheric processes of interest, such as mesoscale convective disturbances and mesoscale atmospheric gravity waves, the best qualitative and quantitative results are often obtained using densely distributed citizen networks.

Finally, a “cheap and easy to do” project of constructing a meteorological station with a variable number of atmospheric sensors is presented. Suggestions are given on how to use such stations in educational and citizen science activities, and even in real-time warning systems.

How to cite: Sepic, J., Vranic, J., Aviani, I., Milanovic, D., and Burazer, M.: Citizen science - an invaluable tool for obtaining high-resolution spatial and temporal meteorological data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7916, https://doi.org/10.5194/egusphere-egu22-7916, 2022.

EGU22-10634 | Presentations | ITS3.1/SSS1.2

Linking citizen scientists with technology to reduce climate data gaps

K. Shein

Among the greatest constraints to accurately monitoring and understanding climate and climate change in many locations is the limited in situ observing capacity and resolution in these places. Climate behaviours, along with dependent environmental and societal processes, are frequently highly localized, while observing systems in a region may be separated by hundreds of kilometers and may not adequately represent conditions between them. Similarly, achieving climate equity in urban regions can be hindered by an inability to resolve urban heat islands at neighborhood scales. In both cases, higher-density observations are necessary for accurate condition monitoring and research, and for the calibration and validation of remote sensing products and predictive models. Coincidentally, urban neighborhoods are heavily populated, and thousands of individuals visit remote locations each day for recreational purposes. Many of these individuals are concerned about climate change and keen to contribute to climate solutions. However, there are several challenges to creating a voluntary citizen science climate observing program that addresses these opportunities. The first is that such a program has the potential for limited uptake if participants are required to volunteer their time or incur a significant cost to participate. The second is that researchers and decision-makers may be reluctant to use the collected data owing to concern over observer bias. This paper describes the ongoing development and implementation by 2DegreesC.org of a technology-driven citizen science approach in which participants are equipped with low-cost automated sensors that systematically sample and communicate scientifically valid climate observations while the participants focus on other activities (e.g., recreation, gardening, fitness). Observations are acquired by a cloud-based system that quality controls, anonymizes, and makes them openly available. Simultaneously, individuals of all backgrounds who share a love of the outdoors become engaged in the scientific process via data-driven communication, research, and educational interactions. Because costs and training are minimized as barriers to participation, data collection is opportunistic, and the technology can be used almost anywhere, this approach is dynamically scalable, with the potential for millions of participants to collect billions of new, accurate observations that integrate with and enhance existing observational network capacity.
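A quality-control and anonymization step of the kind described could look as follows; this is a sketch under assumed field names and plausibility limits, not 2DegreesC.org's actual schema or pipeline:

```python
def quality_control(obs, limits=None):
    """Accept an observation only if every known variable falls inside
    its plausible physical range (a basic gross-error check)."""
    limits = limits or {"temp_c": (-90.0, 60.0), "rh_pct": (0.0, 100.0)}
    return all(lo <= obs[var] <= hi
               for var, (lo, hi) in limits.items() if var in obs)

def anonymize(obs, precision=2):
    """Drop identifying fields and coarsen coordinates (2 decimal places
    is roughly 1 km) before the observation is made openly available."""
    cleaned = {k: v for k, v in obs.items() if k not in ("user_id", "device_id")}
    cleaned["lat"] = round(cleaned["lat"], precision)
    cleaned["lon"] = round(cleaned["lon"], precision)
    return cleaned

obs = {"user_id": "u42", "lat": 46.5191, "lon": 6.5668, "temp_c": 21.3}
if quality_control(obs):
    published = anonymize(obs)
```

Coarsening coordinates rather than dropping them preserves the neighborhood-scale resolution the paper argues for while still protecting individual participants.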

How to cite: Shein, K.: Linking citizen scientists with technology to reduce climate data gaps, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10634, https://doi.org/10.5194/egusphere-egu22-10634, 2022.

EGU22-10776 | Presentations | ITS3.1/SSS1.2

Insights from landowners on Australia's Black Summer bushfires: impacts on soil and vegetation, perceptions, and behaviours

M. Ondik, M. Ooi, and M. Muñoz-Rojas

The 2019-2020 bushfire season (the Black Summer) in Australia was unprecedented both in its breadth and severity and in the disruption to the resources and time dedicated to studying it. Right after one of the most extreme fire seasons on record had hit Australia, a once-in-a-century global pandemic, COVID-19, occurred. The pandemic caused worldwide lockdowns throughout 2020 and 2021 that prevented travel and fieldwork, thus hindering researchers from assessing damage done by the Black Summer bushfires. Early assessments show that the bushfires on Kangaroo Island, South Australia, caused declines in soil nutrients and ground coverage up to 10 months post-fire, indicating a higher risk of soil erosion and fire-induced land degradation at this location. In parallel to the direct impacts the Black Summer bushfires had on native vegetation and soil, the New South Wales Nature Conservation Council observed a noticeable increase in demand for fire management workshops in 2020. What was observed of fires and post-fire outcomes on soil and vegetation from the 2019-2020 bushfire season that drove so many citizens into action? In collaboration with the New South Wales Nature Conservation Council and Rural Fire Service, through the Hotspots Fire Project, we will be surveying and interviewing landowners across New South Wales to collect their observations and insights regarding the Black Summer. By engaging landowners, this project aims to answer the following: within New South Wales, Australia, what impact did the 2019-2020 fire season have on (a) soil health and native vegetation and (b) human behaviours and perceptions of fire in the Australian landscape? The insights gained from NSW citizens will provide a broad assessment of fire impacts across multiple soil and ecosystem types, contributing knowledge of the impacts of severe fires, such as those of the Black Summer, to the scientific community. Furthermore, with the knowledge gained from citizens' reflections, the Hotspots Fire Project will be better able to train and support workshop participants while expanding the coverage of workshops to improve support of landowners across the state. Data regarding fire impacts on soil, ecosystems, and communities have been collected by unknowing citizen scientists all across New South Wales, and to gain access to those data, we need only ask.

How to cite: Ondik, M., Ooi, M., and Muñoz-Rojas, M.: Insights from landowners on Australia's Black Summer bushfires: impacts on soil and vegetation, perceptions, and behaviours, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10776, https://doi.org/10.5194/egusphere-egu22-10776, 2022.

EGU22-11765 | Presentations | ITS3.1/SSS1.2

Citizen-science urban environmental monitoring for the development of an inter-urban environmental prediction model for the city of Los Angeles

M. Llaguno-Munitxa, E. Bou-Zeid, P. Rueda, and X. Shu

High air pollution concentration levels and increased urban heat island intensity are amongst the most critical contemporary urban health concerns. This is why various municipalities are starting to invest in extensive direct air quality and microclimate sensing networks. Through the study of these datasets, it has become evident that understanding inter-urban environmental gradients is imperative to effectively introduce urban land-use strategies that improve environmental conditions in the neighborhoods that suffer the most, and to develop city-scale urban planning solutions for better urban health. However, given economic limitations or divergent political views, extensive direct-sensing environmental networks have not yet been implemented in most cities. While the validity of citizen science environmental datasets is often questioned, given that they rely on low-cost sensing technologies and often lack sensor calibration protocols, they can offer an alternative to municipal sensing networks if the necessary Quality Assurance / Quality Control (QA/QC) protocols are put in place.

This research has focused on the development of a QA/QC protocol for the study of urban environmental data collected by the citizen science PurpleAir initiative in the Bay Area and the city of Los Angeles, where over 700 PurpleAir stations have been installed in recent years. Following the QA/QC process, the PurpleAir data were studied in combination with remote sensing datasets on land surface temperature and the normalized difference vegetation index, and with geospatial datasets on socio-demographic and urban fabric parameters. Through a footprint-based study, and for all PurpleAir station locations, the feature variables and buffer sizes with the highest correlations were identified to compute inter-urban environmental gradient predictions using three supervised machine learning models: a regression tree ensemble, a support vector machine, and Gaussian process regression.
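The model-comparison step can be sketched with scikit-learn; the synthetic features below stand in for the footprint variables (land surface temperature, NDVI, socio-demographic and urban-fabric parameters), and the estimator choices and hyperparameters are illustrative assumptions, not the authors' configuration:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the footprint features and a PurpleAir-style target.
X, y = make_regression(n_samples=200, n_features=6, noise=5.0, random_state=0)

# One estimator per model family named in the abstract.
models = {
    "regression tree ensemble": RandomForestRegressor(n_estimators=100, random_state=0),
    "support vector machine": SVR(kernel="rbf", C=10.0),
    "Gaussian process": GaussianProcessRegressor(alpha=1e-2, normalize_y=True),
}

# Cross-validated R^2 gives a like-for-like comparison across the three models.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in models.items()}
for name, r2 in scores.items():
    print(f"{name}: mean CV R^2 = {r2:.2f}")
```

In a real workflow, the feature matrix would be assembled per station from the buffer-aggregated remote sensing and geospatial layers before the comparison is run.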

How to cite: Llaguno-Munitxa, M., Bou-Zeid, E., Rueda, P., and Shu, X.: Citizen-science urban environmental monitoring for the development of an inter-urban environmental prediction model for the city of Los Angeles, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11765, https://doi.org/10.5194/egusphere-egu22-11765, 2022.

EGU22-11797 | Presentations | ITS3.1/SSS1.2

Attitudes towards a cafetiere-style filter system and paper-based analysis pad for soil nutrition surveillance in-situ: evidence from Kenya and Vietnam 

Samantha Richardson, Philip Kamau, Katie J Parsons, Florence Halstead, Ibrahim Ndirangu, Vo Quang Minh, Van Pham Dang Tri, Hue Le, Nicole Pamme, and Jesse Gitaka

Routine monitoring of soil chemistry is needed for effective crop management, since a poor understanding of nutrient levels affects crop yields and ultimately farmers' livelihoods [1]. In low- and middle-income countries, soil sampling is usually limited due to the required access to analytical services and the high costs of portable sampling equipment [2]. We are developing portable and low-cost sampling and analysis tools which would enable farmers to test their own land and make informed decisions about the need for fertilizers. In this study, we aimed to understand the attitudes of key stakeholders towards this technology and towards sharing the gathered data on public databases, which could inform decisions at government level to better manage agriculture across a country.

 

In Kenya, we surveyed 549 stakeholders from Murang’a and Kiambu counties, 77% men and 23% women. Of the respondent smallholder farmers, 17.2% were young farmers aged 18-35 years, and 81.9% of the farming enterprises were male-headed versus 18.1% female-headed. The survey covered current knowledge of soil nutrition, existing soil management practices, willingness to sample soil in the future, attitudes towards our developed prototypes, motivation towards the democratization of soil data, and willingness to pay for the technology. In Vietnam, a smaller mixed-methods online survey was distributed via national farming unions to 27 stakeholders, in particular engaging younger farmers with an interest in technology and innovation.

Within the Kenyan cohort, only 1.5% of farmers currently test for nutrients and pH. Reasons given for not testing included a lack of knowledge about soil testing (35%), distance to testing centers (34%) and high costs (16%). However, 97% of respondents were interested in soil sampling at least once a year, particularly for monitoring nitrates and phosphates. Nearly all participants (94-99% across males, females and youths) found a cost of around USD 11-12 for repeated analysis of soil samples affordable for their business. Regarding sharing the collected data, 88% believed this would be beneficial, citing for example that data shared with intervention agencies and agricultural officers could help them receive relevant advice.

In Vietnam, 87% of farmers did not have their soil nutrient levels tested, with 62% saying they did not know how and 28% indicating prohibitive costs. Most currently relied on local knowledge and observations to improve their soil quality. 87% thought that the system we were proposing was affordable, with only 6% saying they would not be interested in trialing this new technology. Regarding the soil data, respondents felt that it should be open access and available to everyone.

Our surveys confirmed the need and perceived benefit for our proposed simple-to-operate and cost-effective workflow, which would enable farmers to test soil chemistry themselves on their own land. Farmers were also found to be motivated towards sharing their soil data to get advice from government agencies. The survey results will inform our further development of low-cost, portable analytical tools for simple on-site measurements of nutrient levels within soil.

 

1. Dimkpa, C., et al., Sustainable Agriculture Reviews, 2017, 25, 1-43.

2. Zingore, S., et al., Better Crops, 2015, 99 (1), 24-26.

How to cite: Richardson, S., Kamau, P., Parsons, K. J., Halstead, F., Ndirangu, I., Minh, V. Q., Tri, V. P. D., Le, H., Pamme, N., and Gitaka, J.: Attitudes towards a cafetiere-style filter system and paper-based analysis pad for soil nutrition surveillance in-situ: evidence from Kenya and Vietnam, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11797, https://doi.org/10.5194/egusphere-egu22-11797, 2022.

Keywords: preconcentration, heavy metal, cafetiere, citizen science, paper-based microfluidics

Heavy-metal analysis of water samples using microfluidic paper-based analytical devices (µPADs) with colourimetric readout is of great interest due to its simplicity, affordability and potential for Citizen Science-based data collection [1]. However, this approach is limited by the relatively poor sensitivity of the colourimetric substrates, typically achieving detection within the mg L-1 range, whereas heavy metals exist in the environment at <μg L-1 quantities [2]. Preconcentration is commonly used when analyte concentration is below the analytical range, but this typically requires laboratory equipment and expert users [3]. Here, we are developing a simple method for pre-concentration of heavy metals, to be integrated with a µPAD workflow that would allow Citizen Scientists to carry out pre-concentration as well as readout on-site.

The filter mesh from an off-the-shelf cafetière (350 mL) was replaced with a custom-made bead carrier basket, laser cut in PMMA sheet and featuring >500 evenly spread 100 µm diameter holes. This allowed the water sample to pass through the basket and mix efficiently with the 2.6 g of ion-exchange resin beads housed within (Lewatit® TP207, Ambersep® M4195, Lewatit® MonoPlus SP 112). An aqueous Ni2+ sample (0.3 mg L-1, 300 mL) was placed in the cafetiere and the basket containing ion-exchange material was moved up and down for 5 min to allow Ni2+ adsorption onto the resin. Initial investigations into elution with a safe, non-toxic eluent focused on using NaCl (5 M). These were carried out by placing the elution solution into a shallow dish into which the resin-containing carrier basket was submerged. UV/vis spectroscopy via a colourimetric reaction with nioxime was used to monitor Ni2+ adsorption and elution.

After 5 min of mixing it was found that Lewatit® TP207 and Ambersep® M4195 resins adsorbed up to 90% of the Ni2+ ions present in solution and the Lewatit® MonoPlus SP 112 adsorbed up to 60%. However, the Lewatit® MonoPlus SP 112 resin performed better for elution with NaCl. Initial studies showed up to 30% of the Ni2+ was eluted within only 1 min of mixing with 10 mL 5 M NaCl.
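Taken together, the trapping and elution efficiencies determine the achievable enrichment. A back-of-envelope estimate, using only the figures quoted above (300 mL sample, 10 mL of NaCl eluent, ~60% adsorption and ~30% elution for Lewatit® MonoPlus SP 112), might look like:

```python
# Rough enrichment-factor estimate for the cafetiere pre-concentration step.
# All numeric values are taken from the text; the calculation itself is a
# simple mass-balance sketch, not a validated model.

def enrichment_factor(sample_ml, eluent_ml, adsorbed_frac, eluted_frac):
    """Ratio of eluate concentration to original sample concentration."""
    recovered_frac = adsorbed_frac * eluted_frac  # fraction of Ni2+ reaching the eluate
    return recovered_frac * sample_ml / eluent_ml

ef = enrichment_factor(300, 10, 0.60, 0.30)
print(f"approximate enrichment factor: {ef:.1f}x")  # ~5.4x
```

Improving either the trapping or the elution efficiency raises this factor proportionally, which is why both are targets of the future work described below.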

Using a cafetière as pre-concentration vessel coupled with non-hazardous reagents in the pre-concentration process allows involvement of citizen scientists in more advanced environmental monitoring activities that cannot be achieved with a simple paper-based sensor alone. Future work will investigate the user-friendliness of the design by trialling the system with volunteers and will aim to further improve the trapping and elution efficiencies.

 

References:

  1. Almeida, M., et al., Talanta, 2018, 177, 176-190.
  2. Lace, A., J. Cleary, Chemosens., 2021, 9, 60.
  3. Alahmad, W., et al., Biosens. Bioelectron., 2021, 194, 113574.

 

How to cite: Sari, M., Richardson, S., Mayes, W., Lorch, M., and Pamme, N.: Method development for on-site freshwater analysis with pre-concentration of nickel via ion-exchange resins embedded in a cafetière system and paper-based analytical devices for readout, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11892, https://doi.org/10.5194/egusphere-egu22-11892, 2022.

EGU22-12972 | Presentations | ITS3.1/SSS1.2 | Highlight

Collection of valuable polar data and increase in nature awareness among travellers by using Expedition Cruise Ships as platforms of opportunity 

Verena Meraldi, Tudor Morgan, Amanda Lynnes, and Ylva Grams

Hurtigruten Expeditions, a member of the International Association of Antarctica Tour Operators (IAATO) and the Association of Arctic Expedition Cruise Operators (AECO), has been visiting the fragile polar environments for two decades, witnessing the effects of climate change. Tourism and the number of ships in the polar regions have grown significantly. As a stakeholder aware of the need for long-term protection of these regions, we promote safe and environmentally responsible operations, invest in the understanding and conservation of the areas we visit, and focus on the enrichment of our guests.

For the last couple of years, we have supported the scientific community by transporting researchers and their equipment to and from their study areas in polar regions and we have established collaborations with numerous scientific institutions. In parallel we developed our science program with the goal of educating our guests about the natural environments they are in, as well as to further support the scientific community by providing our ships as platforms of opportunity for spatial and temporal data collection. Participation in Citizen Science programs that complement our lecture program provides an additional education opportunity for guests to better understand the challenges the visited environment faces while contributing to filling scientific knowledge gaps in remote areas and providing data for evidence-based decision making.

We aim to continue working alongside the scientific community and developing partnerships. We believe that scientific research and monitoring in the Arctic and Antarctic can hugely benefit from the recurring presence of our vessels in these areas, as shown by the many projects we have supported so far. In addition, our partnership with the Polar Citizen Science Collective, a charity that facilitates interaction between scientists running Citizen Science projects and expedition tour operators, will allow the development of programs on an industry level, rather than just an operator level, increasing the availability and choice of platforms of opportunity for the scientific community.

How to cite: Meraldi, V., Morgan, T., Lynnes, A., and Grams, Y.: Collection of valuable polar data and increase in nature awareness among travellers by using Expedition Cruise Ships as platforms of opportunity, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12972, https://doi.org/10.5194/egusphere-egu22-12972, 2022.

EGU22-13115 | Presentations | ITS3.1/SSS1.2

Participatory rainfall monitoring: strengthening hydrometeorological risk management and community resilience in Peru 

Miguel Arestegui, Miluska Ordoñez, Abel Cisneros, Giorgio Madueño, Cinthia Almeida, Vannia Aliaga, Nelson Quispe, Carlos Millán, Waldo Lavado, Samuel Huaman, and Jeremy Phillips

Heavy rainfall, floods and debris flow in the Rimac river watershed are recurring events that impact Peruvian people in vulnerable situations. There are few historical records of hydrometeorological variables with sufficient temporal and spatial accuracy. As a result, the efficiency of Early Warning Systems (EWS) dealing with these hazards is critically limited.

In order to tackle this challenge, among other objectives, the Participatory Monitoring Network (Red de Monitoreo Participativo or Red MoP, in Spanish) was formed: an alternative monitoring system supported by the voluntary collaboration of the local population under a citizen science approach. This network collects and communicates data captured with standardized manual rain gauges (< 3 USD). So far it covers densely populated peri-urban districts in the eastern metropolitan area of the capital city of Lima and rural districts in the upper Rimac watershed, and it is expanding to other upper watersheds as well.

Initially led by Practical Action as part of the Zurich Flood Resilience Alliance, it is now also supported by SENAMHI (National Meteorological and Hydrological Service) and INICTEL-UNI (National Telecommunications Research and Training Institute), as an activity of the National EWS Network (RNAT).

For the 2019-2022 rainfall seasons, the network has been gathering data and information from around 80 volunteers located throughout the Rimac and Chillon river watersheds (community members, local government officers, among others): precipitation, other meteorological variables, and information regarding the occurrence of events such as floods and debris flow (locally known as huaycos). SENAMHI has provided a localized 24h forecast for the area covered by the volunteers, experimentally combines official station data with the network’s for spatial analysis of rainfall, and, with researchers from the University of Bristol, analyses potential uses of events gathered through this network. In order to facilitate and automate certain processes, INICTEL-UNI developed a web platform and a mobile application that is being piloted.

We present an analysis of events and trends gathered through this initiative (such as a debris flow that occurred in 2019), specifically hotspots and potential uses of this sort of refined, spatialized rainfall information in the dry and tropical Andes. We also present a qualitative analysis of volunteers’ expectations and perceptions. Finally, we present a meteorological explanation of selected events, supporting the importance of measuring localized precipitation during the occurrence of extreme events in similarly complex physical and social contexts.

How to cite: Arestegui, M., Ordoñez, M., Cisneros, A., Madueño, G., Almeida, C., Aliaga, V., Quispe, N., Millán, C., Lavado, W., Huaman, S., and Phillips, J.: Participatory rainfall monitoring: strengthening hydrometeorological risk management and community resilience in Peru, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13115, https://doi.org/10.5194/egusphere-egu22-13115, 2022.

ESSI4 – Advanced Technologies and Informatics Enabling Transdisciplinary Science

EGU22-873 | Presentations | ESSI4.3

UrbanTEP – Earth Observation Based Services for the Urban Community 

Felix Bachofer, Martin Boettcher, Enguerran Boissier, Gunnar Brandt, Carsten Brockmann, Thomas Esch, Stefanie Feuerstein, Pedro Goncalves, Mattia Marconcini, Michal Opletal, Fabrizio Pacini, Marc Paganini, Tomas Soukup, and Vaclav Svaton

With the increasing volume of information from satellites observing Earth, the technical and methodological prerequisites for users in science and applications are becoming more demanding and complex for generating demand-driven products while exploiting the full potential of large Earth observation (EO) data archives. Since 2014, the European Space Agency (ESA) has been addressing this challenge with the concept of Thematic Exploitation Platforms (TEPs), aiming to create an ecosystem of interconnected platforms providing thematic EO-based data and services for currently seven thematic sectors.

The built-environment and urban sector is addressed by the Urban Thematic Exploitation Platform (UrbanTEP; urban-tep.eu), acknowledging that urbanization and sustainable settlement growth are key global challenges. The linkages to socio-economic development, health, environment, greenhouse gas emissions, climate change and other sectors are deep and multi-faceted. EO-based services and the resulting information products and other spatial datasets have successfully found their way into planning and decision-making processes that address the urban ecosystem. While a range of downstream services are based on stand-alone, labor-intensive processing and visualization solutions, the platform-based approach has proven to be a game-changing technology, capable of revolutionizing service provision, workflows and information products.

UrbanTEP is a collaborative system, which focuses on EO data provision, processing and other spatial products for delivering multi-source information on trans-sectoral urban challenges on various scales. It is developed to provide end-to-end and ready-to-use solutions for a wide spectrum of users in the public and private sector. The core system components are an open, web-based portal connected to distributed and scalable high-level computing infrastructures and providing key functionalities for:

  • high-performance data access and processing (IaaS – Infrastructure as a Service),
  • modular and generic state-of-the-art pre-processing, analysis, and visualization tools and algorithms (SaaS – Software as a Service),
  • customized development and sharing of algorithms, products and services (PaaS – Platform as a Service), and
  • networking and communication.

The facilitation of EO service acceptance and uptake by the urban community, as well as the onboarding of third-party service providers, are essential to PaaS solutions. UrbanTEP is therefore in the process of expanding the range of service solutions and the interconnection with other service providers. The concept of “City Data Cubes” is introduced for urban use cases, and algorithm hosting capabilities (“algo-as-a-service” functionalities) are improved by adopting the OGC Common Architecture standard. In addition, the data analytics and visualization capabilities of UrbanTEP provide functionalities for a user-driven derivation of key urban indicators based on the above-mentioned multi-source data collections. The provision of premium urban information products, like the World Settlement Footprint (WSF) outlining built-up areas globally, allows users and service providers to derive customized demand-driven EO-based products.

How to cite: Bachofer, F., Boettcher, M., Boissier, E., Brandt, G., Brockmann, C., Esch, T., Feuerstein, S., Goncalves, P., Marconcini, M., Opletal, M., Pacini, F., Paganini, M., Soukup, T., and Svaton, V.: UrbanTEP – Earth Observation Based Services for the Urban Community, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-873, https://doi.org/10.5194/egusphere-egu22-873, 2022.

EGU22-5643 | Presentations | ESSI4.3

ASI’s roadmap towards scientific downstream applications of satellite data 

Deodato Tapete and Alessandro Coletta

Within the Italian Government’s guidelines on space and aerospace matters to achieve the strategic objectives of the national space policy [1], the “Telecommunications, Earth Observation and Navigation” (TLC/EO/NAV) sector is the first listed by priority order. TLC/EO/NAV satellite services and applications (the so-called “downstream”) will be exploited by citizens and valorized by Institutions under an integrated application perspective. The downstream sector is therefore a key element to maximize the socio-economic impact of investments in the space sector.

In this context, the Italian Space Agency (ASI) aims to bring its contribution by stimulating downstream development through initiatives that promote the use of national and European space systems and demonstrate new techniques and procedures for information generation, in order to create products and deliver innovative services.

The present paper focuses on the “scientific downstream”, i.e. the (pre-)operational exploitation of state-of-the-art processing and analytical workflows of TLC/EO/NAV data that were designed, tested, validated and demonstrated by researchers and academia originally to answer a specific technical-scientific question (e.g. a more accurate retrieval of a geophysical parameter such as soil moisture in vegetated and crop areas) and are brought to an engineered development stage so as to generate end-use or value-added products (e.g. maps of multi-temporal spatial variation of soil moisture vs. rainfall and irrigation practices, at the temporal frequency that satellite data allow).

To accelerate the uptake of satellite-based technologies for the geosciences as new EO missions are launched and made operational – COSMO-SkyMed First and Second Generation in the Synthetic Aperture Radar domain, and PRISMA in the hyperspectral domain – ASI is running several initiatives, including:

  • data exploitation [e.g. 2], to make users more acquainted with satellite data and consolidate or prepare for new applications;
  • joint research projects with the national scientific community [e.g. 3], to develop novel algorithms up to at least a Scientific Readiness Level (SRL) of 4, i.e. “Proof of concept”, according to ESA SRL Handbook EOP-SM/2776;
  • dedicated R&D programs for SAR and hyperspectral algorithm developments, supporting projects that aim to address key application domains (e.g. precision agriculture, natural hazards, urban areas);
  • prototyping thematic platforms allowing consolidated algorithms and processing routines to be used for generation of EO-based products [e.g. 4];
  • launching a new program for demonstration projects to capitalize on the above algorithm legacy and prepare the scientific downstream.

This paper will discuss ASI’s current activities, achievements, lessons learnt and ongoing developments in the accomplishment of the above roadmap.

[1] https://presidenza.governo.it/AmministrazioneTrasparente/Organizzazione/ArticolazioneUffici/UfficiDirettaPresidente/UfficiDiretta_CONTE/COMINT/DEL_20190325_aerospazio.pdf

[2] Battagliere et al. (2021) Satellite X-band SAR data exploitation trends in the framework of ASI’s COSMO-SkyMed Open Call initiative, Procedia Computer Science 181, 1041–1048.

[3] Tapete et al. (2020) Development of algorithms for the estimation of hydrological parameters combining COSMO-SkyMed and Sentinel time series with in situ measurements, IEEE M2GARSS 2020, 53–56.

[4] Candela et al. (2021) “The Italian Thematic Platform costeLAB: from Earth Observation Big Data to Products in support to Coastal Applications and Downstream,” Proceedings of the 2021 conference on Big Data from Space, EUR 30697 EN, ISBN 978-92-76-37661-3, doi:10.2760/125905, JRC125131.

How to cite: Tapete, D. and Coletta, A.: ASI’s roadmap towards scientific downstream applications of satellite data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5643, https://doi.org/10.5194/egusphere-egu22-5643, 2022.

EGU22-5993 | Presentations | ESSI4.3

Lessons learned from e-shape H2020 Project on the use of the Cloud for Earth Observation 

Marie-Francoise Voidrot-Martinez, Nils Hempelmann, and Josh Lieberman

The recent OGC Cloud Concept Development Study [1] has shown that the major (big) geospatial data providers are moving towards Cloud solutions, not only to make more data more accessible but also to locate data processing next to the data. Meanwhile, recent experience from the H2020 e-shape project shows that the EO developer community still needs support to fully adopt the Cloud, all the more so because, based on feedback received during e-shape’s first sprint, Earth Observation Cloud platforms still need to mature to become more attractive. In order to maintain a good connection between data providers, technology providers and EO developers, it is critical that sponsors keep supporting the efforts of the Earth Observation community on a number of levels: enhancing Copernicus and other open data accessibility, developing Cloud and platform interoperability and operational maturity, increasing cloud skills among developers and scientists, and sustaining funding mechanisms long enough to allow the rendez-vous in the Cloud of all the critical stakeholders, with good timing to reach the critical point of self-sustainability.

During this process it is important not only to develop technical skills and new platform capacities, but also to develop a good understanding of the pricing mechanisms and how to optimize costs. This is much needed to build trust that outsourcing infrastructure will lead to the expected budget savings, and to trigger the budget-organization evolutions that moving to Cloud technologies requires.

 

[1] Echterhoff, J., Wagermann, J., Lieberman, J.: OGC 21-023, OGC Earth Observation Cloud Platform Concept Development Study Report. Open Geospatial Consortium (2021). https://docs.ogc.org/per/21-023.html

How to cite: Voidrot-Martinez, M.-F., Hempelmann, N., and Lieberman, J.: Lessons learned from e-shape H2020 Project on the use of the Cloud for Earth Observation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5993, https://doi.org/10.5194/egusphere-egu22-5993, 2022.

EGU22-6225 | Presentations | ESSI4.3

Detection of Double-Cropping Systems Using Machine Learning and Sentinel 2 Imagery - A Case Study of Bačka and Srem Regions, Serbia     

Miljana Marković, Predrag Lugonja, Sanja Brdar, Branislav Živaljević, and Vladimir Crnojević

Increasing agricultural production is inevitable in the future, since population growth and climate change have led to significant pressure on global food security. One way is to intensify the use of existing cropland through multi-cropping, allowing multiple uses of a single field during one year. This research aims to identify and map double-cropping land using multi-temporal Sentinel 2 imagery from 2021 and advanced machine learning models. The case study focuses on Bačka and Srem, regions located in the Autonomous Province of Vojvodina, Republic of Serbia. These regions are characterized by fertile land and widespread agricultural production. However, double-cropping is currently rare due to usually dry summers, though this is changing as the number of irrigation systems increases.

Considering the small number of double-cropping fields, there was a need for direct ground truth data collection. For that reason, the first step was to reduce the area of interest to gain insight into the locations of potential double-cropping land. This was achieved using a threshold method based on crop phenology during the year. The NDVI (Normalized Difference Vegetation Index) time series was used to define appropriate thresholds on the two seasonal peak values to discriminate double-cropping within each pixel. The identified locations were then used on-site for collecting ground truth data. Based on the collected data and the analyzed NDVI time series, besides double-crop, three more classes of arable land were distinguished and included in the classification: single winter crops, single summer crops and clover. The collected data contained 46 parcels of double crops, 43 of single winter crops, 55 of single summer crops and 27 parcels of clover. We used the time-series images to create a dataset for training a pixel-based Random Forest classification. The results showed a very high overall accuracy of 99% and an F-score higher than 0.9 for each of the classes.
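The two-peak thresholding step can be sketched as follows. This is a simplified illustration, not the authors' exact procedure; the threshold value, the peak-separation rule and the synthetic NDVI series are assumptions:

```python
import numpy as np

def is_double_crop(ndvi, threshold=0.6, min_gap=6):
    """Flag a pixel as double-cropped if its NDVI time series shows two
    distinct peaks above `threshold`, at least `min_gap` time steps apart
    (illustrative, uncalibrated parameter values)."""
    ndvi = np.asarray(ndvi, dtype=float)
    # local maxima above the threshold
    peaks = [i for i in range(1, len(ndvi) - 1)
             if ndvi[i] > ndvi[i - 1] and ndvi[i] >= ndvi[i + 1]
             and ndvi[i] >= threshold]
    # require two peaks sufficiently separated in time
    return any(b - a >= min_gap for a in peaks for b in peaks if b > a)

# Synthetic examples: winter + summer crop vs. a single summer crop
t = np.arange(24)
double = 0.7 * np.exp(-((t - 6) ** 2) / 8) + 0.8 * np.exp(-((t - 18) ** 2) / 8)
single = 0.8 * np.exp(-((t - 12) ** 2) / 8)
print(is_double_crop(double), is_double_crop(single))  # True False
```

In practice, cloud gaps and noise in real Sentinel 2 time series would call for smoothing or interpolation before such a test.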

This methodology is a suitable approach for detecting double-cropping systems, with further potential to identify exact crop types and the main practice of combining crops. The findings of this study showed that only about 2% of the study area was under this practice. Besides positive economic outcomes, utilizing these systems brings significant environmental benefits and rational use of the soil, gaining the same advantages without expanding physical cropland. Therefore, the resulting geospatial datasets of double-cropping croplands could help answer important questions relevant to food security, irrigation and climate change.

How to cite: Marković, M., Lugonja, P., Brdar, S., Živaljević, B., and Crnojević, V.: Detection of Double-Cropping Systems Using Machine Learning and Sentinel 2 Imagery - A Case Study of Bačka and Srem Regions, Serbia    , EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6225, https://doi.org/10.5194/egusphere-egu22-6225, 2022.

EGU22-11073 | Presentations | ESSI4.3

The ADAM federated data handling platform to enable scientific services development 

Stefano Natali, Simone Mantovani, Clemens Rendl, and Ramiro Marco Figuera

The concept of ‘Digital Earth’ (DE), as outlined in 1999 by the former US Vice-President Al Gore, foresees a “multi-resolution, three-dimensional representation of the planet that would make it possible to find, visualise and make sense of vast amounts of geo-referenced information on physical and social environments”. The DE concept is quickly becoming reality, with a strong dynamic component provided by real-time data, forecasts and projections. The Copernicus programme provides a fundamental contribution to this concept. The challenge is to access and extract information from distributed data centres containing decades of global and local environmental data generated by in-situ sensors, numerical models, satellites, and individuals.

The Advanced geospatial Data Management platform (ADAM, https://adamplatform.eu/) implements the DE concept: ADAM provides access to a large variety of multi-year global geospatial collections from satellites (Sentinels, Landsat, MODIS) and from model analyses and predictions (CAMS, C3S), enabling data discovery, visualization, combination, processing and download. ADAM provides datacubeless access and processing services, namely it exposes multi-dimensional (spatial, temporal, spectral, …) subsetting capabilities as well as on-the-fly processing functions, so that the consumer (human or machine) gets only the piece of data wherever and whenever needed, avoiding the transfer of large amounts of useless bytes or massive local processing. A key feature of the ADAM concept is the standardization of interfaces: each layer (discovery, access, processing, visualization) exposes OGC (https://www.ogc.org/)-compliant interfaces to foster federation and interoperability.
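The multi-dimensional subsetting exposed through OGC-compliant interfaces can be illustrated by assembling a WCS 2.0 GetCoverage request, where one `subset` key-value pair is added per dimension. The endpoint, coverage identifier and time-axis label below are hypothetical placeholders, not ADAM's actual API:

```python
from urllib.parse import urlencode

# Hypothetical ADAM-style WCS endpoint (illustrative only).
BASE_URL = "https://example.adamplatform.eu/wcs"

def getcoverage_url(coverage_id, lat, lon, t0, t1):
    """Build an OGC WCS 2.0 GetCoverage KVP request with spatial and
    temporal subsetting, so only the requested slice is transferred."""
    params = [
        ("service", "WCS"),
        ("version", "2.0.1"),
        ("request", "GetCoverage"),
        ("coverageId", coverage_id),
        # one 'subset' parameter per trimmed dimension
        ("subset", f"Lat({lat[0]},{lat[1]})"),
        ("subset", f"Long({lon[0]},{lon[1]})"),
        ("subset", f'ansi("{t0}","{t1}")'),  # time-axis label assumed
        ("format", "image/tiff"),
    ]
    return BASE_URL + "?" + urlencode(params)

url = getcoverage_url("NDVI_MODIS", (44.0, 46.2), (18.8, 20.6),
                      "2021-01-01", "2021-12-31")
print(url)
```

Because every layer speaks a standard protocol like this, a federated client can address many back-ends with the same request logic.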

ADAM is a horizontal (generic) layer supporting different vertical domains such as agriculture, cultural and natural heritage, marine applications, critical infrastructure monitoring, public health, education and media. This contribution focuses on two main operational applications, for atmospheric sciences and for climate change assessment and mitigation.

TOP (http://top-platform.eu/) is a web-based platform built on top of the ADAM data exploitation layer, offering users from the atmospheric sciences domain a Virtual Research Environment (VRE) to exploit Copernicus atmospheric and climate data products, such as Sentinel-5P data, CAMS products and European Environment Agency in-situ measurements. Deployed on the Mundi DIAS, it is the first operational platform implementing the data triangle (EO, model and in-situ data) and hence creates an atmospheric multi-source data cube, stimulating a multidisciplinary scientific approach through the availability of various collections.

One of the main effects of the evolving climate is changing precipitation and temperature regimes: EO provides a fundamental contribution to high-resolution monitoring of these variables. ADAM offers access to global datasets from the Copernicus Climate Change Service (C3S), the ESA Climate Change Initiative (ESA CCI) and the GPM program. In the framework of the ESA EO4SD Climate Resilience cluster (https://eo4sd-climate.gmv.com/), more than 30 climate variables and indicators were computed for climate screening, climate risk assessment and climate adaptation. Indicators are provided to various entities such as the World Bank Climate Change Knowledge Portal (CCKP, https://climateknowledgeportal.worldbank.org/). Another relevant example is the STRENCH project (https://www.interreg-central.eu/Content.Node/STRENCH.html), which allows managers of natural and cultural heritage sites to assess climate risk and define mitigation actions through a dedicated webGIS tool fed by a large pool of climate indicators computed from models and satellite data via ADAM.

How to cite: Natali, S., Mantovani, S., Rendl, C., and Marco Figuera, R.: The ADAM federated data handling platform to enable scientific services development, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11073, https://doi.org/10.5194/egusphere-egu22-11073, 2022.

EGU22-12431 | Presentations | ESSI4.3

Map WhiteBoard - New Technology in Collaborative Research in Smart Farming 

Karel Charvat, Runar Bergheim, Raitis Berzins, Dailis Langovskis, Frantisek Zadrazil, and Hana Kubickova

Earth Observation plays an important role in precision agriculture, which holds great promise for the modernization of agriculture in terms of both environmental sustainability and economic outlook. The vast data archives made available through Copernicus and related infrastructures, combined with a low entry threshold into the domain of AI technologies, have made it possible, if not outright easy, to make meaningful predictions that divide individual agricultural fields into zones where variable rates of fertilizer, irrigation and/or pesticide are required for optimal soil productivity and minimized environmental impact. The use of Earth Observation in precision agriculture has been the subject of intensive research for many years, and commercial applications already exist, but the full potential of EO has not yet been utilised. This limits the uptake of precision agriculture technology and thus also the realization of its promised benefits. The EO4Agri project, in its Strategic Research Agenda, identified as one of the priorities for the future the support of collaborative research between experts from different domains (EO, agriculture, Artificial Intelligence) with the direct involvement of farmers and advisors. Until now, however, no platforms existed that could support such collaborative research. The Map Whiteboard now opens new possibilities for collaborative research in this domain.

The Map Whiteboard concept at the centre of this submission is intended to plug into the “traditional” workflow of variable rate applications and enables agricultural advisors/extension services and farmers to interact, adjust and share an understanding of the estimations made by the ‘black box’, thus increasing trust in, and improving the quality of, the prediction models. The vision of the Map Whiteboard innovation was conceived out of a sequence of large-scale collaborative writing efforts using Google Docs. As opposed to traditional offline word processing tools, Google Docs allows multiple people to edit the same document at the same time, allowing all connected clients to see changes made to the document in real time by synchronising all changes between all connected clients via the server. The ability to work on a shared body of text, avoiding the need to integrate fragments from multiple source documents with multiple styles, removed many obstacles associated with traditional document editing. The Map Whiteboard technology seeks to do the same for the traditional use of GIS tools. The overall vision is that the Map Whiteboard will be to GIS what Google Docs is to word processing. We are now introducing this technology as a tool for collaborative work between farmers and advisory services, offering them analysis of EO data. The Map Whiteboard is now being intensively tested, and tools for online analysis of EO data are being integrated.
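The Google-Docs-style synchronisation described above can be sketched, in highly simplified form, as a server that applies each client's edit to a shared map state and mirrors it to every connected client. This is a toy model for illustration only, not the actual Map Whiteboard protocol:

```python
class MapWhiteboardServer:
    """Toy model of shared-map synchronisation: every edit is applied to
    the server-side state and pushed to all connected clients."""

    def __init__(self):
        self.state = {}    # feature_id -> feature properties
        self.clients = []

    def connect(self, client):
        self.clients.append(client)
        client.state = dict(self.state)  # late joiners receive the current map

    def submit_edit(self, feature_id, properties):
        self.state[feature_id] = properties
        for client in self.clients:      # broadcast to everyone
            client.state[feature_id] = properties

class Client:
    def __init__(self):
        self.state = {}

server = MapWhiteboardServer()
farmer, advisor = Client(), Client()
server.connect(farmer)
server.connect(advisor)

# The advisor adjusts a fertiliser zone; the farmer sees it immediately.
server.submit_edit("zone-1", {"rate_kg_ha": 120})
print(farmer.state == advisor.state)  # True
```

A production system would of course use persistent connections (e.g. WebSockets) and conflict resolution, but the core idea is the same: one authoritative shared state, mirrored to all participants.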

How to cite: Charvat, K., Bergheim, R., Berzins, R., Langovskis, D., Zadrazil, F., and Kubickova, H.: Map WhiteBoard - New Technology in Collaborative Research in Smart Farming, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12431, https://doi.org/10.5194/egusphere-egu22-12431, 2022.

Atmospheric water management or cloud seeding technologies might be effectively applied to assess the impacts of a changing climate on water security and renewable energy use. The observations gathered during such assessments might be exploited to mitigate the negative impacts of climate change by enhancing the water supply as part of a water security plan, and/or by effectively removing low-level supercooled cloud decks and fogs to provide added sunshine, and thus facilitate renewable energy use, during typically overcast daytime periods. Cloud seeding technologies are used to positively affect the natural hydrologic cycle, while respecting and avoiding damage to public health, safety and the environment. This talk summarizes atmospheric water management technologies and their use, how these technologies might be applied as part of a strategy to ensure water security, and how their application might provide an opportunity for recouping lost energy potential.

How to cite: DeFelice, T.: The role atmospheric water management technologies might play in Nature-based solutions (NbS), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1941, https://doi.org/10.5194/egusphere-egu22-1941, 2022.

EGU22-2263 | Presentations | GI6.3

EasyGeoModels: a New Tool to Investigate Seismic and Volcanic Deformations Retrieved through Geodetic Data. Software Implementation and Examples on the Campi Flegrei Caldera and the 2016 Amatrice Earthquake 

Giuseppe Solaro, Sabatino Buonanno, Raffaele Castaldo, Claudio De Luca, Adele Fusco, Mariarosaria Manzo, Susi Pepe, Pietro Tizzani, Emanuela Valerio, Giovanni Zeni, Simone Atzori, and Riccardo Lanari

The increasingly widespread use of space geodesy has resulted in numerous high-quality surface deformation data sets. DInSAR, for instance, is a well-established satellite technique for investigating tectonically active and volcanic areas characterized by a wide spatial extent of the inherent deformation. These geodetic data can provide important constraints on the involved fault geometry and its slip distribution, as well as on the type and position of an active magmatic source. For this reason, in recent years many researchers have developed robust and semiautomatic methods for inverting suitable models to infer the source type and geometry characteristics from the retrieved surface deformations.
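As a minimal illustration of the forward models underlying such inversions (not the easyGeoModels code itself), the classical Mogi point source predicts surface displacement from a pressurized spherical source at depth; an inversion then searches for the source parameters that best reproduce the geodetic data. The parameter values below are arbitrary:

```python
import math

def mogi(r, depth, dV, nu=0.25):
    """Surface displacement (radial, vertical), in metres, for a Mogi point
    source: volume change dV (m^3) at a given depth (m), at horizontal
    distance r (m) from the source axis, with Poisson ratio nu."""
    R3 = (r**2 + depth**2) ** 1.5
    k = (1.0 - nu) * dV / math.pi
    return k * r / R3, k * depth / R3   # (u_radial, u_vertical)

# Example: 10^6 m^3 of inflation at 3 km depth -> a few cm of uplift above the source
ur, uz = mogi(0.0, 3000.0, 1.0e6)
```

An inversion wraps a forward model like this in an optimization loop, minimizing the misfit between predicted and observed (DInSAR/GPS) displacements.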

In this work we will present a new software package we have implemented, named easyGeoModels, that can be used by geophysicists but also by less skilled users interested in source modelling of ground deformation in both seismo-tectonic and volcanic contexts. This software is characterized by some innovative aspects compared to existing similar tools, such as (i) an easy-to-use graphical interface that allows the user, even if not particularly expert, to manage the data to be inverted, the input parameters of one or more sources, and the choice of the deformation source(s) in an effective and simple way; (ii) the possibility of selecting the GPS data to be inverted simply by selecting the area of interest: in this case the software will automatically consider for the inversion only the GPS stations present in the selected area and will download the relative data from the Nevada Geodetic Laboratory site; (iii) the generation of output files in GeoTIFF, KMZ and Shapefile formats, which allow a faster and more immediate visualization through GIS tools or Google Earth.

Finally, as applications, we will show some preliminary results obtained with the easyGeoModels software for areas characterized by large deformation, both in a volcanic context, such as that of the Campi Flegrei caldera, and in a seismo-tectonic one, as in the case of the Amatrice earthquake (central Italy), which occurred on 24 August 2016.

How to cite: Solaro, G., Buonanno, S., Castaldo, R., De Luca, C., Fusco, A., Manzo, M., Pepe, S., Tizzani, P., Valerio, E., Zeni, G., Atzori, S., and Lanari, R.: EasyGeoModels: a New Tool to Investigate Seismic and Volcanic Deformations Retrieved through Geodetic Data. Software Implementation and Examples on the Campi Flegrei Caldera and the 2016 Amatrice Earthquake, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2263, https://doi.org/10.5194/egusphere-egu22-2263, 2022.

EGU22-4876 | Presentations | GI6.3 | Highlight

Geodetic imaging of the magma ascent process during the 2021 Cumbre Vieja (La Palma, Canary Islands) eruption 

Monika Przeor, José Barrancos, Raffaele Castaldo, Luca D’Auria, Antonio Pepe, Susi Pepe, Takeshi Sagiya, Giuseppe Solaro, and Pietro Tizzani

On the 11th of September of 2021, a seismic sequence began on La Palma (Canary Islands), followed by a rapid and significant ground deformation reaching more than 10 cm in the vertical component of the permanent GNSS station ARID (Aridane) operated by the Instituto Volcanológico de Canarias (INVOLCAN). The pre-eruptive episode lasted only nine days and was characterized by an intense deformation in the western part of the island and intense seismicity with the upward migration of hypocenters. After the onset of the eruption, which occurred on the 19th of September of 2021, the deformation increased a few cm more, reaching a maximum on the 22nd of September and subsequently showing a nearly steady deflation trend in the following months.

We obtained a Sentinel-1 DInSAR dataset along both ascending and descending orbits, starting from the 27th of February of 2021 and the 13th of January of 2021, respectively. We selected the study area at a radial distance of 13 km from the eruption point (latitude: 28.612; longitude: -17.866) to realize an inverse model of the geometry of the causative sources of the observed ground deformation. While the ascending orbit that passed on the 18th of September indicated mainly a dike intrusion at shallow depth, the descending orbit from the 20th of September seemed to indicate a deformation caused by at least two sources: the pre-eruptive intrusion and the nearly vertical eruptive dike. The deeper source spatially coincides with the location of most of the pre-eruptive volcano-tectonic hypocenters.

Finally, based on the preliminary inverse model of the DInSAR dataset, we applied the geodetic imaging of D'Auria et al. (2015) to retrieve the time-varying spatial distribution of volumetric ground deformation sources. The final results show the kinematics of the upward dike propagation and magma ascent.

 

References

D’Auria, L., Pepe, S., Castaldo, R., Giudicepietro, F., Macedonio, G., Ricciolino, P., ... & Zinno, I. (2015). Magma injection beneath the urban area of Naples: a new mechanism for the 2012–2013 volcanic unrest at Campi Flegrei caldera. Scientific reports, 5(1), 1-11.

How to cite: Przeor, M., Barrancos, J., Castaldo, R., D’Auria, L., Pepe, A., Pepe, S., Sagiya, T., Solaro, G., and Tizzani, P.: Geodetic imaging of the magma ascent process during the 2021 Cumbre Vieja (La Palma, Canary Islands) eruption, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4876, https://doi.org/10.5194/egusphere-egu22-4876, 2022.

EGU22-5431 | Presentations | GI6.3

Modeling Potential Impacts of Gas Exploitation on the Israeli Marine Ecosystem Using Ecopath with Ecosim 

Ella Lahav, Peleg Astrahan, Eyal Ofir, Gideon Gal, and Revital Bookman

Exploration, production, extraction and transport of fossil fuels in the marine environment are accompanied by an inherent risk to the surrounding ecosystems as a result of the on-going operations or due to technical faults, accidents or geo-hazards. Limited work has been conducted on potential impacts on the Mediterranean marine ecosystem due to the lack of information on organism responses to hydrocarbon pollution. In this study, we used the Ecopath with Ecosim (EwE) modeling software, which is designed for policy evaluation and provides assessments of the impacts of various stressors on an ecosystem. An existing EwE-based Ecospace food-web model of the Israeli Exclusive Economic Zone (EEZ) was enhanced to include local organism response curves to various levels of contaminants, such as crude oil, in the water and on the sea-floor sediments. The goal of this study is to evaluate and quantify the possible ecological impacts of pollution events that might occur due to fossil fuel exploitation related activities. Multiple spatial static and dynamic scenarios, describing various pollution quantities and a range of habitats and locations, were constructed. Using the enhanced Ecospace models for assessing the potential impacts of gas exploitation on organism biomass, the spatial and temporal distribution and food-web functioning were tested and evaluated. The results of this study will provide a quantitative assessment of the expected ecological impacts that could assist decision makers in developing management and conservation strategies.

How to cite: Lahav, E., Astrahan, P., Ofir, E., Gal, G., and Bookman, R.: Modeling Potential Impacts of Gas Exploitation on the Israeli Marine Ecosystem Using Ecopath with Ecosim, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5431, https://doi.org/10.5194/egusphere-egu22-5431, 2022.

EGU22-5618 | Presentations | GI6.3

Slope stability monitoring system via three-dimensional simulations of rockfalls in Ischia island, Southern Italy 

Ada De Matteo, Massimiliano Alvioli, Antonello Bonfante, Maurizio Buonanno, Raffaele Castaldo, and Pietro Tizzani

Volcanoes are dynamically active systems in continuous evolution. This behaviour is emphasized by many different processes, e.g., fumarolic activity, earthquakes, volcanic slope instabilities and climactic volcanic eruptions. Volcanic edifices experience slope instability as a consequence of different forcings, such as (i) eruption mechanisms and depositional processes, (ii) tectonic stresses, and (iii) extreme weather conditions; all these events induce the mobilization of unstable, fractured volcanic flanks.

Several methods exist to gather information about slope stability and to map the trajectories followed by individual falling rocks on individual slopes. These methods involve direct field observation, laser scanning, and terrestrial or aerial photogrammetry. Such information is useful to infer the likely location of future rockfalls, and represents a valuable input for the application of three-dimensional models of rockfall trajectories.

Ischia island is a volcano-tectonic horst that is part of the Phlegrean Volcanic District, Southern Italy. It covers an area of about 46 km2 and has experienced remarkable ground uplift events due to a resurgence phenomenon. Slope instability is correlated both with earthquake events and with volcanic phenomena. Specifically, evidence suggests that rockfalls occurred as an effect of gravitational instability on the major scarps generated by the rapid resurgence, eased by the widespread rock fracturing.

We present the results of an analysis of the most probable trajectories of individual rockfall masses affecting the slopes of Ischia island. We first identified the prospective rockfall sources through expert mapping of source areas in sample locations and statistical analysis over the whole island. The probabilistic sources are the main input of the three-dimensional rockfall simulation software STONE.

The software assumes point-like masses falling under the sole action of gravity and the constraints of topography, and it calculates trajectories dominated by ballistic dynamics during falling, bouncing and rolling on the ground. Analysis of high-definition pictures of critical sectors, acquired using a UAV (Unmanned Aerial Vehicle) platform, will allow a detailed localization of source areas and additional, more robust simulations.

The procedure can be viewed as a multiscale analysis and allows the best allocation of computational effort and economic resources, focusing more detailed analysis on the slopes identified as the riskiest during the first, large-scale analysis of the whole area.

How to cite: De Matteo, A., Alvioli, M., Bonfante, A., Buonanno, M., Castaldo, R., and Tizzani, P.: Slope stability monitoring system via three-dimensional simulations of rockfalls in Ischia island, Southern Italy, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5618, https://doi.org/10.5194/egusphere-egu22-5618, 2022.

EGU22-6226 | Presentations | GI6.3

The framework for improving air quality monitoring over Indian cities 

Arindam Roy, Athanasios Nenes, and Satoshi Takahama

The Indian air quality monitoring guideline is directly adopted from the World Health Organization (1977) guidelines without place-based modification. According to the Indian air quality guidelines (2003), the location of monitoring sites should be determined from air quality modeling and previous air quality information. If such information is not available, the use of emission densities, wind data, land-use patterns and population information is recommended for prioritizing areas for air quality monitoring. The mixed land-use distribution over Indian cities and randomly distributed sources pose serious challenges, as Indian cities (unlike those in other parts of the world) are characterized by a lack of distinct residential, commercial, and industrial regions, so the concept of "homogeneous emissions" (which has guided site monitoring decisions) simply does not apply. In addition, the data needed for decision-making, such as emission inventories and population information, are either unavailable or outdated for Indian cities. Unlike cities in the Global North, the Indian urban-scape has distinctive features in terms of land use, source and population distribution which have not been addressed in air quality guidelines.

We have developed an implementable place-based framework to address the above problem of establishing effective new air quality stations in India and other regions with complex land-use patterns. Four Indian million-plus cities were selected for the present study: Lucknow, Pune, Nashik and Kanpur. We broadly classified air quality monitoring objectives into three: monitoring population exposure, measurements for compliance with the national standards, and characterization of sources. Each monitoring station in the four cities was evaluated, and metadata were created to identify the monitoring objective of each station. We find that the present air quality monitoring networks are highly inadequate for characterizing average population exposure throughout each city, as current stations are predominantly located at sites of pedestrian exposure and are not representative of city-wide exposure.

Possible new sites for monitoring were identified using night-time light data, satellite-derived PM2.5, existing emission inventories, land-use patterns and other ancillary open-source data. In Lucknow, Pune and Nashik, setting up stations in highly populated areas is recommended to fill the knowledge gaps on average population exposure. In Kanpur, it was recommended to incorporate stations to measure short-term pollution exposure at traffic and industrial sites. Rapidly developing peri-urban regions were identified using night-time light data, and recommendations were provided for setting up monitoring stations in these regions.

How to cite: Roy, A., Nenes, A., and Takahama, S.: The framework for improving air quality monitoring over Indian cities, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6226, https://doi.org/10.5194/egusphere-egu22-6226, 2022.

EGU22-6374 | Presentations | GI6.3

Geochemical monitoring of the Tenerife North-East and North West Rift Zones by means of diffuse degassing surveys 

Lía Pitti Pimienta, Fátima Rodríguez, María Asensio-Ramos, Gladys Melián, Daniel Di Nardo, Alba Martín-Lorenzo, Mar Alonso, Rubén García-Hernández, Víctor Ortega, David Martínez Van Dorth, María Cordero, Tai Albertos, Pedro A. Hernández, and Nemesio M. Pérez

Tenerife (2,034 km2), the largest island of the Canarian archipelago, is characterized by three volcanic rifts oriented NW-SE, NE-SW and N-S, with a central volcanic structure in the middle, Las Cañadas Caldera, hosting the Teide-Pico Viejo volcanic complex. The North-West Rift-Zone (NWRZ) is one of the youngest and most active volcanic systems of the island, where three historical eruptions (Boca Cangrejo in the 16th century, Arenas Negras in 1706 and Chinyero in 1909) have occurred, whereas the North-East Rift-Zone (NERZ) is more complex than the others due to the existence of the Pedro Gil stratovolcano that broke the main NE-SW structure 0.8 Ma ago. The most recent eruptive activity along the NERZ took place during 1704 and 1705 across 13 km of fissural eruptions at Siete Fuentes, Fasnia and Arafo. To monitor potential volcanic activity through a multidisciplinary approach, diffuse degassing studies have been carried out on a yearly basis since 2000 at the NWRZ (72 km2) and since 2001 at the NERZ (210 km2). Long-term variations in the diffuse CO2 output in the NWRZ have shown a temporal correlation with the onsets of seismic activity at Tenerife, supporting unrest of the volcanic system, as is also suggested by anomalous seismic activity recorded in the studied area during April 2004 and October 2016 (Hernández et al., 2017). In-situ measurements of CO2 efflux from the surface environment were performed according to the accumulation chamber method using a portable non-dispersive infrared (NDIR) sensor. Soil CO2 efflux values for the 2021 survey ranged between non-detectable values and 104 g·m-2·d-1, with an average value of 8 g·m-2·d-1 for the NWRZ. For the NERZ, soil CO2 efflux values ranged between non-detectable values and 79 g·m-2·d-1, with an average value of 7 g·m-2·d-1. The probability plot technique applied to the data allowed us to distinguish different geochemical populations.
The background population represented 49.2% and 74.0% of the total data for the NWRZ and NERZ, respectively, with a mean value (1.7 - 2.0 g·m-2·d-1) similar to the background values calculated for other volcanic systems in the Canary Islands with similar soils, vegetation and climate (Hernández et al., 2017). The peak population represented 0.9% and 0.7% for the NWRZ and NERZ, respectively, with mean values of 45 and 57 g·m-2·d-1. Soil CO2 efflux contour maps were constructed to identify spatial-temporal anomalies and to quantify the total CO2 emission using the sequential Gaussian simulation (sGs) interpolation method. Diffuse emission rates of 506 ± 22 t·d-1 for the NWRZ and 1,509 ± 58 t·d-1 for the NERZ were obtained. The normalized CO2 emission by area was estimated at 7.03 t·d-1·km-2 for the NWRZ and 7.2 t·d-1·km-2 for the NERZ. Monitoring the diffuse CO2 emission contributes to detecting early warning signals of volcanic unrest, especially in areas where visible degassing is non-existent, as in the Tenerife NWRZ and NERZ.
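The unit conversions behind the emission figures quoted above can be sketched as follows; note that the total emission reported in the abstract is the sGs-based estimate, which differs from the naive mean-times-area calculation shown for comparison:

```python
def efflux_to_total(mean_gm2d, area_km2):
    """Mean soil CO2 efflux (g m^-2 d^-1) times survey area (km^2) gives the
    total output directly in t d^-1, since 1 km^2 = 1e6 m^2 and 1 t = 1e6 g,
    so the two factors of 1e6 cancel."""
    return mean_gm2d * area_km2

def normalized_emission(total_t_per_day, area_km2):
    """Area-normalized CO2 emission in t d^-1 km^-2."""
    return total_t_per_day / area_km2

# NWRZ: naive mean-times-area estimate vs. the sGs-based total of 506 t/d
naive_nwrz = efflux_to_total(8.0, 72.0)          # 576 t/d
norm_nwrz = normalized_emission(506.0, 72.0)     # ~7.03 t/d/km^2
norm_nerz = normalized_emission(1509.0, 210.0)   # ~7.2 t/d/km^2
```

The gap between the naive estimate and the sGs total reflects the geostatistical interpolation, which accounts for the spatial structure of the efflux field rather than assuming a uniform mean.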

Hernández et al. (2017). Bull Volcanol, 79:30, DOI 10.1007/s00445-017-1109-9.

How to cite: Pitti Pimienta, L., Rodríguez, F., Asensio-Ramos, M., Melián, G., Di Nardo, D., Martín-Lorenzo, A., Alonso, M., García-Hernández, R., Ortega, V., Martínez Van Dorth, D., Cordero, M., Albertos, T., Hernández, P. A., and Pérez, N. M.: Geochemical monitoring of the Tenerife North-East and North West Rift Zones by means of diffuse degassing surveys, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6374, https://doi.org/10.5194/egusphere-egu22-6374, 2022.

Two moderate earthquakes with magnitude ML5.0 occurred on the 11th of November 2020 near Lake Mavrovo in northwestern Macedonia. Mavrovo is an artificial lake, with a dam whose construction began in 1947 and which was filled by 1953. Its maximum length is 10 km, its width 5 km and its depth 50 m. Given its water volume, it is possible that the geological factors causing earthquakes could also affect the hydrobiological characteristics of the flow system surrounding the lake.

A list of 180 earthquakes registered by the local stations with magnitudes equal to or greater than ML1.7 was analysed in terms of temporal and spatial distribution around the lake. No specific clustering of events was noticed in the foreshock period from July 2020. In the aftershock period, events were most numerous for about a month after the main shocks. However, there was another period of increased seismicity during March 2021, followed by a gradual decrease onwards. The distribution of epicentres was mainly along the terrain of the Radika river and a few smaller tributaries of the lake system.
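The temporal analysis described above amounts to counting catalogue events per month above a magnitude threshold. A generic sketch, with illustrative synthetic events rather than the actual Mavrovo catalogue:

```python
from collections import Counter
from datetime import date

def monthly_counts(catalogue, min_mag=1.7):
    """catalogue: iterable of (date, magnitude) pairs; returns a Counter keyed
    by (year, month) for events with magnitude >= min_mag."""
    return Counter((d.year, d.month) for d, m in catalogue if m >= min_mag)

# Illustrative events around the (hypothetical) main-shock date
events = [(date(2020, 11, 11), 5.0), (date(2020, 11, 11), 5.0),
          (date(2020, 11, 20), 2.3), (date(2020, 12, 2), 1.9),
          (date(2021, 3, 15), 2.1), (date(2020, 11, 25), 1.5)]
counts = monthly_counts(events)   # the ML1.5 event falls below the threshold
```

Plotting such monthly counts against time is what reveals the aftershock decay and the secondary burst of seismicity noted in the abstract.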

A comparative analysis was done with the dataset collected by the program run at the Department of Biology at the Faculty of Natural Sciences, University UKIM in Skopje. Environmental investigations in Europe have shown stress reactions of hydrobionts with respect to water temperature and heavy metal pollution, for example under the influence of radioactive radiation. Earthquake-induced changes most often affect the chemical-physical properties of water quality and temperature stratification, i.e., the mixing of water masses. In our research, we analyse in detail, for the first time, the relationship between the seismological activity in the Jul 2020-Nov 2021 period and its possible environmental impact on the macrozoobenthos population of Lake Mavrovo.

How to cite: Sinadinovski, C. and Smiljkov, S.: Numerical analysis of Seismic and Hydrobiological data around lake Mavrovo in the period Jul.2020-Nov.2021, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6452, https://doi.org/10.5194/egusphere-egu22-6452, 2022.

EGU22-6468 | Presentations | GI6.3

Measuring greenhouse gas fluxes – what methods do we have versus what methods do we need? 

David Bastviken, Julie Wilk, Nguyen Thanh Duc, Magnus Gålfalk, Martin Karlson, Tina Neset, Tomasz Opach, Alex Enrich Prast, and Ingrid Sundgren

Appropriate methods to measure greenhouse gas (GHG) fluxes are critical for our ability to detect fluxes, understand their regulation, set adequate priorities for climate change mitigation efforts, and verify that these efforts are effective. Ideally, we need reliable, accessible, and affordable measurements at relevant scales. We surveyed present GHG flux measurement methods, identified from an analysis of >11,000 scientific publications and a questionnaire to sector professionals, and analysed method pros and cons versus the need for novel methodology. While existing methods are well-suited for addressing certain questions, this presentation highlights fundamental limitations relative to the GHG flux measurement needs for verifiable and transparent action to mitigate many types of emissions. Cost and non-academic accessibility are key aspects, along with fundamental measurement performance. These method limitations contribute to the difficulties in verifying GHG mitigation efforts for transparency and accountability under the Paris Agreement. Resolving this mismatch between method capacity and societal needs is urgently required for effective climate mitigation. This type of methodological mismatch is common but seems to receive high priority in other knowledge domains: the obvious need to prioritize the development of accurate diagnostic methods for effective treatments in healthcare is one example. This presentation provides guidance regarding the need to prioritize the development of novel GHG flux measurement methods.

How to cite: Bastviken, D., Wilk, J., Duc, N. T., Gålfalk, M., Karlson, M., Neset, T., Opach, T., Enrich Prast, A., and Sundgren, I.: Measuring greenhouse gas fluxes – what methods do we have versus what methods do we need?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6468, https://doi.org/10.5194/egusphere-egu22-6468, 2022.

EGU22-8458 | Presentations | GI6.3

Temporal evolution of dissolved gases in groundwater of Tenerife Island 

Cecilia Amonte, Nemesio M. Pérez, Gladys V. Melián, María Asensio-Ramos, Eleazar Padrón, Pedro A. Hernández, and Ana Meire Feijoo

The oceanic active volcanic island of Tenerife (2,034 km2) is the largest of the Canarian archipelago. There are more than 1,000 galleries (horizontal drillings) on the island, which are used for groundwater exploitation and allow reaching the aquifer at different depths and elevations. This work presents the first extensive study on the temporal variation of dissolved gases in groundwaters from the Fuente del Valle and San Fernando galleries (Tenerife, Spain) from April 2016 to June 2020. This investigation focuses on the chemical and isotopic content of several dissolved gas species (CO2, He, O2, N2 and CH4) present in the groundwaters and its relationship with the seismic activity registered on the island. The results show CO2 as the major dissolved gas species in the groundwater from both galleries, presenting mean values of 260 cm3STP·L-1 and 69 cm3STP·L-1 for Fuente del Valle and San Fernando, respectively. The average δ13C-CO2 data (-3.9‰ for Fuente del Valle and -6.4‰ for San Fernando) suggest a clear endogenous origin as a result of interaction with deep-origin fluids. A bubbling gas sample from the Fuente del Valle gallery was analysed, yielding a CO2-rich gas (87 vol.%) with a considerable He enrichment (7.3 ppm). The isotopic data of both components in the bubbling gas support the results obtained for the dissolved gases, showing an endogenous component that could be affected by the varying activity of the hydrothermal system. During the study period, an important seismic swarm occurred on October 2, 2016, followed by an increase of the seismic activity in and around Tenerife. After this event, important geochemical variations were registered in the dissolved gas species, such as the dissolved CO2 and He content and the CO2/O2, He/CO2, He/N2 and CH4/CO2 ratios. These findings suggest an injection of fluids into the hydrothermal system during October 2016, which evidences the connection between the groundwaters and the hydrothermal system.
The present work demonstrates the importance of dissolved gas studies in groundwater for volcanic surveillance.

How to cite: Amonte, C., Pérez, N. M., Melián, G. V., Asensio-Ramos, M., Padrón, E., Hernández, P. A., and Meire Feijoo, A.: Temporal evolution of dissolved gases in groundwater of Tenerife Island, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8458, https://doi.org/10.5194/egusphere-egu22-8458, 2022.

Land surface temperature (LST) is a manifestation of the land surface thermal environment (LSTE) and an important driver of the physical processes of surface energy balance at local to global scales. Tenerife is one of the most heterogeneous islands among the Canaries from a climatological and bio-geographical point of view. We study the surface thermal conditions of the volcanic island with remote sensing techniques. In particular, we consider a time series of Landsat 8 (L8) level 2A images for the period 2013 to 2019 to estimate LST from surface reflectance (SR) and brightness temperature (BT) images. A total of 26 L8 dates were selected based on cloud cover information from metadata (land cloud cover < 10%) to estimate pixel-level LST with an algorithm based on Radiative Transfer Equations (RTE). The algorithm relies on the Normalized Difference Vegetation Index (NDVI) for estimating emissivity pixel by pixel. We apply Independent Component Analysis (ICA), which has proven to be a powerful tool for data mining and, in particular, for separating a multivariate LST dataset into a finite number of components that have the maximum relative statistical independence. ICA separated the land surface temperature time series of Tenerife into 11 components that can be associated with geographic and bioclimatic zones of the island. The first ten components are related to physical factors; the 11th component, on the contrary, presented a more complex pattern, resulting possibly from its small amplitude and the combination of various factors into a single component. The signal components recognized with the ICA technique, especially in areas of active volcanism, could be the basis for the space-time monitoring of the endogenous component of the LST due to surface hydrothermal and/or geothermal activity.
The results are encouraging, although the 16-day revisit frequency of Landsat limits the frequency of observation, which could be increased by applying data-fusion techniques to medium and coarse spatial resolution images. The use of such systems for automatic processing and analysis of thermal images may in the future be a fundamental tool for the surveillance of the background activity of active and dormant volcanoes worldwide.
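The NDVI-based emissivity step mentioned above can be sketched with the widely used NDVI-thresholds approach. The soil and vegetation emissivity values and the NDVI thresholds below are typical textbook values, not necessarily the ones used in this study:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def emissivity(ndvi_value, eps_soil=0.97, eps_veg=0.99,
               ndvi_soil=0.2, ndvi_veg=0.5):
    """Pixel emissivity from NDVI via the thresholds method: bare soil below
    ndvi_soil, full vegetation above ndvi_veg, and in between a mix weighted
    by the fractional vegetation cover Pv."""
    if ndvi_value < ndvi_soil:
        return eps_soil
    if ndvi_value > ndvi_veg:
        return eps_veg
    pv = ((ndvi_value - ndvi_soil) / (ndvi_veg - ndvi_soil)) ** 2
    return eps_veg * pv + eps_soil * (1.0 - pv)
```

Applied pixel by pixel, this emissivity map feeds the RTE-based LST retrieval from the brightness temperature images.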

How to cite: Stroppiana, D., Przeor, M., D’Auria, L., and Tizzani, P.: Analysis of thermal regimes at Tenerife(Canary Islands) with Independent Component Analysis applied to time series of Remotely Sensed Land Surface Temperatures, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8580, https://doi.org/10.5194/egusphere-egu22-8580, 2022.

EGU22-9376 | Presentations | GI6.3

An IoT based approach to ultra high resolution air quality mapping through field calibrated monitoring devices 

Saverio De Vito, Grazia Fattoruso, and Domenico Toscano

Recent advances in IoT and chemical sensor calibration technologies have led to the proposal of hierarchical air quality monitoring networks. They are indeed complex systems relying on sensing nodes which differ in size, cost, accuracy, technology and maintenance needs, while having the potential to empower smart cities and communities with increased knowledge of the highly spatiotemporally variable air quality phenomenon (see [1]). The AirHeritage project, funded by the Urban Innovative Actions program, has developed and implemented a hierarchical monitoring system offering real-time assessments and model-based forecasting services, including 7 fixed low-cost sensor stations, one (mobile and temporarily located) regulatory-grade analyzer and a citizen-science-based ultra-high-resolution AQ mapping tool based on field-calibrated mobile analyzers. This work will analyze the preliminary results of the project by focusing on the machine-learning-driven sensor calibration methodology and the citizen-science-based air quality mapping campaigns. Thirty chemical and particulate matter multisensor devices have been deployed in Portici, a 4 km2 city located 7 km south of Naples which is affected by significant car traffic. The devices have been entrusted to local citizens' associations for implementing 1 preliminary validation campaign (see [2]) and 3 opportunistic 2-month monitoring campaigns. Every 6 months, the devices undergo a colocation period of at least 3 weeks with a regulatory-grade analyzer, allowing for the building of training and validation datasets. Multilinear regression software components are trained to reach ppb-level accuracy (MAE < 10 ug/m3 for NO2 and O3, < 15 ug/m3 for PM2.5 and PM10, < 300 ug/m3 for CO) and encoded in a companion smartphone app which allows users a real-time assessment of personal exposure. In particular, a novel AQI, strongly based on the European Air Quality Index [3], has been developed for real-time AQ data communication.
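The multilinear calibration step can be sketched as follows: during colocation, raw sensor readings plus covariates (e.g. temperature) are regressed against the reference analyzer, and the fitted coefficients are then applied in the field. This is a generic least-squares sketch with synthetic, noise-free numbers, not the project's actual calibration code:

```python
def fit_mlr(X, y):
    """Ordinary least squares with intercept, via the normal equations
    solved by Gauss-Jordan elimination. Returns [intercept, coef1, ...]."""
    Xa = [[1.0] + list(row) for row in X]
    k = len(Xa[0])
    A = [[sum(r[i] * r[j] for r in Xa) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(Xa, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))  # partial pivoting
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(k):
            if row != col:
                f = A[row][col] / A[col][col]
                A[row] = [a - f * c for a, c in zip(A[row], A[col])]
                b[row] -= f * b[col]
    return [b[i] / A[i][i] for i in range(k)]

def predict(coef, row):
    return coef[0] + sum(c * x for c, x in zip(coef[1:], row))

# Synthetic colocation data: reference NO2 (ug/m3) depends linearly on the raw
# sensor signal and on temperature (hypothetical numbers for illustration).
raw_temp = [(0.5, 20), (0.8, 22), (1.1, 18), (1.5, 25), (2.0, 21), (2.4, 19)]
ref_no2 = [10 + 30 * r - 0.4 * t for r, t in raw_temp]
coef = fit_mlr(raw_temp, ref_no2)
mae = sum(abs(predict(coef, x) - y) for x, y in zip(raw_temp, ref_no2)) / len(ref_no2)
```

With real, noisy colocation data the MAE of course does not vanish; the project's targets quoted above (e.g. MAE < 10 ug/m3 for NO2) are the acceptance thresholds for such fits.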
Data have been collected using a custom IoT device management platform entrusted with data ingestion, storage and data visualization roles. Finally, the data have been used to build ultra-high-resolution (UHR) AQ maps, using a spatial binning approach (25 m x 25 m) and median computation for each bin receiving more than 30 measurements during the campaign. The resulting maps have shown the possibility of pinpointing city AQ hotspots, which will allow fact-based remediation policies in cities lacking objective technologies to locally assess concentration exposure.
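The UHR mapping step described above (25 m x 25 m bins, median of bins with more than 30 measurements) can be sketched generically as:

```python
from collections import defaultdict
from statistics import median

def bin_measurements(points, bin_size=25.0, min_count=30):
    """points: iterable of (x, y, value) with coordinates in metres.
    Returns {bin index: median value}, keeping only bins that received
    more than min_count measurements during the campaign."""
    bins = defaultdict(list)
    for x, y, value in points:
        bins[(int(x // bin_size), int(y // bin_size))].append(value)
    return {key: median(vals) for key, vals in bins.items() if len(vals) > min_count}
```

The median is robust to the occasional outlier reading from a mobile low-cost sensor, which is why it is preferred over the mean for this kind of crowdsensed map.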

 

[1] Nuria Castell et Al., Can commercial low-cost sensor platforms contribute to air quality monitoring and exposure estimates?, Environment International, Volume 99, 2017, Pages 293-302 ISSN 0160-4120, https://doi.org/10.1016/j.envint.2016.12.007.

[2] De Vito, S, et al., Crowdsensing IoT Architecture for Pervasive Air Quality and Exposome Monitoring: Design, Development, Calibration, and Long-Term Validation. Sensors 202121, 5219. https://doi.org/10.3390/s21155219

[3] https://airindex.eea.europa.eu/Map/AQI/Viewer/

How to cite: De Vito, S., Fattoruso, G., and Toscano, D.: An IoT based approach to ultra high resolution air quality mapping through field calibrated monitoring devices, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9376, https://doi.org/10.5194/egusphere-egu22-9376, 2022.

EGU22-10290 | Presentations | GI6.3

Soil gas Rn monitoring at Cumbre Vieja prior and during the 2021 eruption, La Palma, Canary Islands 

Daniel Di Nardo, Eleazar Padrón, Claudia Rodríguez-Pérez, Germán D. Padilla, José Barrancos, Pedro A. Hernández, María Asensio-Ramos, and Nemesio M. Pérez

Cumbre Vieja volcano (La Palma, Canary Islands, Spain) experienced a volcanic eruption that started on September 19 and finished on December 13, 2021. The eruption is considered the longest volcanic event since records have been available on the island: it lasted 85 days and 8 hours and produced 1,219 hectares of lava flows. La Palma is the fifth largest (706 km2) and the second highest (2,423 m a.s.l.) island of the Canarian archipelago. Cumbre Vieja volcano, where volcanic activity has taken place exclusively in the last 123 ka, forms the southern part of the island. In 2017, two remarkable seismic swarms interrupted a 46-year seismic silence at Cumbre Vieja, with earthquakes located beneath the volcano at depths ranging between 14 and 28 km and a maximum magnitude of 2.7. Five additional seismic swarms were registered in 2020 and four in 2021. The eruption started ~1 week after the start of the last seismic swarm.

As part of the INVOLCAN volcano monitoring program of Cumbre Vieja, soil gas radon (222Rn) and thoron (220Rn) are being monitored at five sites using SARAD RTM 2010-2 and RTM 1688-2 portable radon monitors. 222Rn and 220Rn are two radioactive isotopes of radon with half-lives of 3.8 days and 54.4 seconds, respectively. Both isotopes can diffuse easily through the soil and can be detected at very low concentrations, but their migration at large scales (tens to hundreds of meters) is supported by advection (pressure changes) and is related to the existence of a carrier gas source (geothermal fluids or fluids linked to magmatic and metamorphic phenomena) and of preferential degassing routes (deep faults). Previous results on the monitoring of soil Rn in the Canary Islands for volcano monitoring purposes are promising (Padilla et al., 2013).

The most remarkable result of the Rn monitoring network of Cumbre Vieja was observed at station LPA01, located to the north-east of Cumbre Vieja. Since mid-March 2021, soil 222Rn activity experienced a sustained increase, reaching maximum values of ~1.0 × 10^4 Bq/m3 days before the eruption onset. During the eruptive period, soil 222Rn activity showed a gradual decreasing trend. The increase of magmatic-gas pressure due to magma movement towards the surface, together with the transport of anomalous 222Rn originating from hydrofracturing of rock, from direct magma degassing, or from both, is the most plausible explanation for the increases in radon activity observed at LPA01 before the eruption onset. As soil gas radon activity increased prior to the eruption onset, this monitoring technique can be efficiently used as an early warning sign of magma pressurization beneath La Palma Island.
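A simple way to flag such a sustained pre-eruptive increase in an Rn time series is to compare it against statistics of a quiet reference period. The sketch below uses synthetic data and an illustrative 3-sigma, 7-day persistence rule; it is not INVOLCAN's actual detection procedure:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic daily soil 222Rn activity (Bq/m^3): a stable background followed
# by a sustained pre-eruptive ramp starting in late July.
days = pd.date_range("2021-01-01", periods=300, freq="D")
ramp = np.where(np.arange(300) > 200, (np.arange(300) - 200) * 80.0, 0.0)
rn = pd.Series(rng.normal(2000, 200, 300) + ramp, index=days)

# Statistics of a quiet reference period (here the first half of the year).
ref = rn["2021-01-01":"2021-06-30"]
mu, sigma = ref.mean(), ref.std()

# Flag a sustained anomaly: at least 7 consecutive days above mu + 3*sigma.
above = rn > mu + 3 * sigma
sustained = above.rolling(7).sum() == 7
print("anomaly confirmed on:", sustained.idxmax().date())
```

Requiring several consecutive exceedances suppresses isolated spikes from, e.g., rainfall or barometric pumping.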

Padilla, G. D., et al. (2013), Geochem. Geophys. Geosyst., 14, 432–447, doi:10.1029/2012GC004375.

 

How to cite: Di Nardo, D., Padrón, E., Rodríguez-Pérez, C., Padilla, G. D., Barrancos, J., Hernández, P. A., Asensio-Ramos, M., and Pérez, N. M.: Soil gas Rn monitoring at Cumbre Vieja prior and during the 2021 eruption, La Palma, Canary Islands, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10290, https://doi.org/10.5194/egusphere-egu22-10290, 2022.

EGU22-10603 | Presentations | GI6.3 | Highlight

The "Campania Trasparente" multiscale and multimedia monitoring project: an unprecedented experience in Italy. 

Stefano Albanese, Annamaria Lima, Annalise Guarino, Chengkai Qu, Domenico Cicchella, Mauro Esposito, Pellegrino Cerino, Antonio Pizzolante, and Benedetto De Vivo

In 2015, the "Campania Trasparente" project (http://www.campaniatrasparente.it), a monitoring plan focused on assessing the environmental conditions of the territory of the Campania region, started thanks to the financial support of the regional government. General management of the project was entrusted to the Experimental Zooprophylactic Institute of Southern Italy (IZSM).
In the project framework, the collection and analysis of many environmental and biological samples (including soil, air and human blood specimens) were completed. The primary aims of the project were to explore the existence of a link between the presence of some illnesses in the local population and the status of the environment, and to generate a reliable database to assess the healthiness of local foodstuffs.
Six research units were active in the framework of the project. As for soil and air, the Environmental Geochemistry Working Group (EGWG) at the Department of Earth, Environment and Resources Sciences, University of Naples Federico II, was in charge of most of the research activities. Specifically, the EGWG completed the elaboration of the data on potentially toxic metals/metalloids (PTMs) and organic contaminants (PAHs, OCPs, Dioxins) in the regional soils and air.
The monitoring of air contaminants lasted more than one year, and it was completed employing passive air samplers (PAS) and deposimeters spread across the whole region.
Three volumes were published, including statistical elaborations and geochemical maps of all the contaminants analysed to provide both the regional government and local scientific and professional community with a reliable tool to approach local environmental problems starting from a sound base of knowledge.
Geochemical distribution patterns of potentially toxic elements (PTEs), for example, were used to establish local geochemical background/baseline intervals for those metals (naturally enriched in regional soils) found to systematically exceed the national environmental guidelines (set by Legislative Decree 152/2006).
Data from the air, analysed in terms of concentration and time variation, were, instead, fundamental to discriminate the areas of the regional territory characterised by heavy contamination associated with the emission of organic compounds from anthropic sources.

The integration of all the data generated within the "Campania Trasparente" framework, including the data proceeding from the Susceptible Population Exposure Study (SPES), focusing on human biomonitoring (based on blood), allowed the development of a regional-wide conceptual model to be used as a base to generate highly specialised risk assessments for regional population and local communities affected by specific environmental problems.

How to cite: Albanese, S., Lima, A., Guarino, A., Qu, C., Cicchella, D., Esposito, M., Cerino, P., Pizzolante, A., and De Vivo, B.: The "Campania Trasparente" multiscale and multimedia monitoring project: an unprecedented experience in Italy., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10603, https://doi.org/10.5194/egusphere-egu22-10603, 2022.

EGU22-10659 | Presentations | GI6.3

Long-term variations of diffuse CO2, He and H2 at the summit crater of Teide volcano, Tenerife, Canary Islands during 1999-2021 

Germán D. Padilla, Fátima Rodríguez, María Asensio-Ramos, Gladys V. Melián, Mar Alonso, Alba Martín-Lorenzo, Beverley C. Coldwell, Claudia Rodríguez, Jose M. Santana de León, Eleazar Padrón, José Barrancos, Luca D'Auria, Pedro A. Hernández, and Nemesio M. Pérez

Tenerife Island (2,034 km2) is the largest island of the Canarian archipelago. Its structure is controlled by a volcano-tectonic rift system with NW, NE and N-S directions, with the Teide-Pico Viejo volcanic system located at their intersection. Teide rises to 3,718 m a.s.l., and its last eruption occurred in 1798 through an adventive cone of the Teide-Pico Viejo volcanic complex. Although Teide volcano shows only a weak fumarolic system, the volcanic gas emissions observed at the summit cone consist mostly of diffuse CO2 degassing.

 

In this study we investigate the evolution of the Teide-Pico Viejo volcanic system using a comprehensive diffuse degassing geochemical dataset: 216 geochemical surveys were performed during the period 1999-2021 at the summit crater of Teide volcano, covering an area of 6,972 m2. Diffuse CO2 emission was estimated at 38 sampling sites, homogeneously distributed inside the crater, by means of a portable non-dispersive infrared (NDIR) CO2 fluxmeter using the accumulation chamber method. Additionally, soil gases were sampled at 40 cm depth using a metallic probe and a 60 cc hypodermic syringe, stored in 10 cc glass vials and sent to the laboratory, where the He and H2 contents were analysed by quadrupole mass spectrometry and micro-gas chromatography, respectively. To estimate the He and H2 emission rates at each sampling point, the diffusive component was estimated following Fick's law and the convective component following Darcy's law. In all cases, spatial distribution maps were constructed by averaging the results of 100 realizations of the sequential Gaussian simulation (sGs) algorithm in order to estimate the CO2, He and H2 emission rates.
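The diffusive component of the soil gas emission can be sketched with Fick's first law. The numbers below are purely illustrative; the effective diffusion coefficient and concentrations are assumptions, not survey values:

```python
import numpy as np

def fickian_flux(c_depth, c_surface, depth_m, d_eff):
    """Diffusive gas flux from Fick's first law: J = -D_eff * dC/dz.

    c_depth, c_surface : gas concentration (g/m^3) at depth and at the surface
    depth_m            : sampling depth (m), e.g. 0.40 m as in the surveys
    d_eff              : effective diffusion coefficient in soil (m^2/s)
    """
    gradient = (c_surface - c_depth) / depth_m  # dC/dz, z positive upward
    return -d_eff * gradient                    # flux toward the surface (g/m^2/s)

# Illustrative numbers: CO2-rich soil gas at 40 cm depth.
j = fickian_flux(c_depth=180.0, c_surface=0.7, depth_m=0.40, d_eff=1e-6)
j_daily = j * 86400  # convert g/m^2/s to g/m^2/day
print(f"diffusive flux ~ {j_daily:.1f} g m^-2 d^-1")
```

The convective component would be estimated analogously from Darcy's law, using soil permeability and the pressure gradient instead of the concentration gradient.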

 

During the 22 years of the study period, CO2 emissions ranged from 2.0 to 345.9 t/d, He emissions from 0.013 to 4.5 kg/d, and H2 emissions from 1.3 to 64.4 kg/d. On October 2, 2016, a seismic swarm of long-period events was recorded on Tenerife, followed by an increase of the seismic activity in and around the island (D'Auria et al., 2019; Padrón et al., 2021). Several geochemical parameters showed significant changes during ∼June-August 2016, 1-2 months before the occurrence of the October 2, 2016 long-period seismic swarm (Padrón et al., 2021). Diffuse degassing studies were useful to conclude that the origin of the 2 October 2016 seismic swarm was an input of magmatic fluids triggered by an injection of fresh magma and convective mixing. Thenceforth, relatively high values have been obtained for the three soil gas species studied at the crater of Teide, with the maximum emission rates registered during 2021. This increase reflects a process of pressurization of the volcanic-hydrothermal system. The increments in CO2, He and H2 emissions indicate changes in the activity of the system and can be useful to understand the behaviour of the volcanic system and to forecast future volcanic activity. Monitoring the diffuse degassing rates has proven to be an essential tool for the prediction of future seismic-volcanic unrest, and has become important to reduce volcanic risk in Tenerife.

D'Auria, L., et al. (2019). J. Geophys. Res.124,8739-8752

Padrón, E., et al., (2021). J. Geophys. Res.126,e2020JB020318

How to cite: Padilla, G. D., Rodríguez, F., Asensio-Ramos, M., Melián, G. V., Alonso, M., Martín-Lorenzo, A., Coldwell, B. C., Rodríguez, C., Santana de León, J. M., Padrón, E., Barrancos, J., D'Auria, L., Hernández, P. A., and Pérez, N. M.: Long-term variations of diffuse CO2, He and H2 at the summit crater of Teide volcano, Tenerife, Canary Islands during 1999-2021, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10659, https://doi.org/10.5194/egusphere-egu22-10659, 2022.

EGU22-11493 | Presentations | GI6.3

Analysis and Modelling of 2009-2013 Unrest Episodes at Campi Flegrei Caldera 

Raffaele Castaldo, Giuseppe Solaro, and Pietro Tizzani

Geodetic modelling is a valuable tool to infer the volume and geometry of a volcanic source system; it represents a key procedure for detecting and characterizing unrest and eruption episodes. In this study, we analyse the 2009-2013 uplift phenomenon at Campi Flegrei (CF) caldera in terms of spatial and temporal variations of the stress/strain field due to the effect of the retrieved inflating source. We start by performing a 3D stationary finite element (FE) modelling of geodetic datasets to retrieve the geometry and location of the deformation source. The geometry of the FE domain takes into account both the topography and the bathymetry of the whole caldera. For the definition of the domain elastic parameters, we take into account the Vp/Vs distribution from seismic tomography. We optimize our model parameters by exploiting two different geodetic datasets: GPS data and DInSAR measurements. The modelling results suggest that the best-fit source is a three-axis oblate spheroid ~3 km deep, similar to a sill-like body. Furthermore, in order to verify the reliability of the retrieved geometry, we calculate the Total Horizontal Derivative (THD) of the vertical velocity component and compare it with the THD computed from the DInSAR measurements. Subsequently, starting from the same FE modelling domain, we explore a 3D time-dependent FE model, comparing the spatial and temporal distribution of the shear stress and volumetric strain with the seismic swarms beneath the caldera. Finally, we found that low values of shear stress are observed in correspondence with the shallow hydrothermal system, where low-magnitude earthquakes occur, whereas high values of shear stress are found at depths of about 3 km, where high-magnitude earthquakes nucleate.
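The THD used to check the source geometry is simply the modulus of the horizontal gradient of the vertical velocity field. A minimal sketch on a synthetic Gaussian uplift bell follows; amplitudes and dimensions are illustrative, not the CF data:

```python
import numpy as np

def total_horizontal_derivative(v, dx, dy):
    """Total Horizontal Derivative of a gridded field v(y, x):
    THD = sqrt((dv/dx)^2 + (dv/dy)^2)."""
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    return np.hypot(dv_dx, dv_dy)

# Synthetic vertical-velocity field: a smooth Gaussian uplift bell.
x = np.linspace(-5000, 5000, 201)  # metres, 50 m grid step
y = np.linspace(-5000, 5000, 201)
xx, yy = np.meshgrid(x, y)
uplift = 0.10 * np.exp(-(xx**2 + yy**2) / (2 * 1500.0**2))  # m/yr

thd = total_horizontal_derivative(uplift, dx=50.0, dy=50.0)

# The THD of a bell-shaped anomaly vanishes at the centre and peaks on a
# ring around the source edges, which is why it highlights source geometry.
centre = thd[100, 100]
ring_max = thd.max()
print(f"THD at centre: {centre:.2e}, max on ring: {ring_max:.2e}")
```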

How to cite: Castaldo, R., Solaro, G., and Tizzani, P.: Analysis and Modelling of 2009-2013 Unrest Episodes at Campi Flegrei Caldera, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11493, https://doi.org/10.5194/egusphere-egu22-11493, 2022.

EGU22-11874 | Presentations | GI6.3

Time evolution of Land Surface Temperature (LST) in active volcanic areas detected via integration of satellite and ground-based measurements: the Campi Flegrei caldera (Southern Italy) case study. 

Andrea Barone, Daniela Stroppiana, Raffaele Castaldo, Stefano Caliro, Giovanni Chiodini, Luca D'Auria, Gianluca Gola, Ferdinando Parisi, Susi Pepe, Giuseppe Solaro, and Pietro Tizzani

Thermal features of environmental systems are increasingly investigated following the development of remote sensing technologies; the increasing availability of Earth Observation (EO) missions allows the retrieval of the Land Surface Temperature (LST) parameter, which is widely used for a large variety of applications (Galve et al., 2018). In volcanic environments, LST is an indicator of the spatial distribution of thermal anomalies at the ground surface, supporting tools designed for monitoring purposes (Caputo et al., 2019); therefore, LST can be used to understand endogenous processes and to model thermal sources.

In this framework, we present the results of activities carried out in the FLUIDs PRIN project, which aims at the characterization and modeling of fluid migration at different scales (https://www.prinfluids.it/). We propose a multi-scale analysis of thermal data at the Campi Flegrei caldera (CFc); this area is well known for hosting thermal processes related to both magmatic and hydrothermal systems (Chiodini et al., 2015; Castaldo et al., 2021). Accordingly, data collected at different scales are suitable for separating local thermal trends from regional ones. In particular, in this work we compare LST estimated from Landsat satellite images covering the entire volcanic area with ground measurements near the Solfatara crater.

Firstly, we exploit Landsat data to derive time series of LST by applying an algorithm based on the Radiative Transfer Equation (RTE) (Qin et al., 2001; Jimenez-Munoz et al., 2014). The algorithm exploits both thermal infrared (TIR) and visible/near-infrared (VIS/NIR) bands of different Landsat missions in the period 2000-2021; we used time series imagery from the Landsat 5 (L5), Landsat 7 (L7) and Landsat 8 (L8) satellite missions to retrieve the thermal patterns of the CFc area, with spatial resolutions of 30 m for the VIS/NIR bands and 60 m to 120 m for the TIR bands. The theoretical acquisition frequency of the Landsat missions is 16 days, which is effectively reduced over the study area by cloud cover: Landsat images with high cloud cover were in fact discarded from the time series.

In particular, we process both daytime and nighttime acquisitions to characterize thermal features at the ground surface in the absence of solar radiation. To emphasize the thermal anomalies of endogenous phenomena, the retrieved LST time series are corrected through the following steps: (i) removal of spatial and temporal outliers; (ii) correction for the adiabatic gradient of air temperature with altitude; (iii) detection and removal of the seasonal component.
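These three corrections can be sketched for a single pixel's time series as follows. The data are synthetic, and the 3-sigma outlier rule and the 6.5 °C/km standard lapse rate are illustrative assumptions, not necessarily the values used in the study:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Synthetic monthly LST series (degC) for one pixel: seasonal cycle, noise,
# a few spurious hot outliers, and a known pixel altitude.
t = pd.date_range("2000-01-01", "2021-12-01", freq="MS")
n = len(t)
seasonal = 10 * np.sin(2 * np.pi * (t.month - 4) / 12)
lst = pd.Series(18 + seasonal + rng.normal(0, 1, n), index=t)
lst.iloc[[30, 90, 150]] += 25  # three spurious hot outliers
altitude_m = 250.0

# (i) remove temporal outliers (> 3 robust sigmas from the series median)
dev = (lst - lst.median()).abs()
clean = lst.where(dev < 3 * 1.4826 * dev.median())

# (ii) correct to sea level with a standard lapse rate (~6.5 degC/km)
clean = clean + 6.5e-3 * altitude_m

# (iii) remove the seasonal component via a monthly climatology
climatology = clean.groupby(clean.index.month).transform("mean")
anomaly = clean - climatology

print(f"kept {clean.notna().sum()} of {n} samples")
```

The residual `anomaly` series is what would then be inspected for non-seasonal, endogenous thermal trends.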

Regarding the ground-based acquisitions, we consider the data collected by the Osservatorio Vesuviano, National Institute of Geophysics and Volcanology (OV-INGV, Naples, Italy); the dataset consists of 151 thermal measurements distributed within the 2004-2021 time interval and acquired inside the Solfatara and Pisciarelli areas at a depth of 0.01 m below the ground surface. Similarly, we process this dataset following corrections (i) and (iii).

Finally, we compare the temporal evolution of thermal patterns retrieved by the satellite and ground-based measurements, highlighting the supporting information provided by LST and its integration with data at ground.

How to cite: Barone, A., Stroppiana, D., Castaldo, R., Caliro, S., Chiodini, G., D'Auria, L., Gola, G., Parisi, F., Pepe, S., Solaro, G., and Tizzani, P.: Time evolution of Land Surface Temperature (LST) in active volcanic areas detected via integration of satellite and ground-based measurements: the Campi Flegrei caldera (Southern Italy) case study., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11874, https://doi.org/10.5194/egusphere-egu22-11874, 2022.

EGU22-11990 | Presentations | GI6.3

Integrating geophysical, geochemical, petrological and geological data for the thermal and rheological characterization of unconventional geothermal fields: the case study of Long Valley Caldera 

Gianluca Gola, Andrea Barone, Raffaele Castaldo, Giovanni Chiodini, Luca D'Auria, Rubén García-Hernández, Susi Pepe, Giuseppe Solaro, and Pietro Tizzani

We propose a novel multidisciplinary approach to image the thermo-rheological stratification beneath active volcanic areas such as Long Valley Caldera (LVC), which hosts a magmatic-hydrothermal system. Geothermal facilities near the Casa Diablo locality supply 40 MWe from three binary power plants, exploiting about 850 kg s−1 of 160-180 °C water that circulates within the volcanic sediments 200 to 350 meters deep. We performed thermal fluid-dynamic modelling via an optimization procedure constraining the thermal conditions of the crust. We characterize the topology of the hot magmatic bodies and the hot fluid circulation (the permeable fault zones) using both a novel imaging of the a and b parameters of the Gutenberg-Richter law and an innovative analysis procedure for P-wave tomographic models. The optimization procedure provides the permeability of the reservoir (5.0 × 10−14 m2) and of the fault zone (5.0 × 10−14 – 1.0 × 10−13 m2), as well as the temperature of the magma body (750–800 °C). The imaging of the rheological properties of the crust indicates that the brittle/ductile (B/D) transition occurs at about 5 km b.s.l. beneath the resurgent dome. Brittle conditions reappear at about 15 km b.s.l., in agreement with previous observations. The comparison between the conductive and the conductive-convective heat transfer models highlights that the deeper fluid circulation efficiently cools the volumes above the magmatic body, transferring the heat to the shallow geothermal system. This process has a significant impact on the rheological properties of the upper crust, such as the migration of the B/D transition. Our findings show an active magmatic system (6-10 km deep) and confirm that LVC is a long-lived silicic caldera system. Furthermore, the occurrence of deep-seated, super-hot geothermal resources 4.5-5.0 km deep, possibly in supercritical conditions, cannot be ruled out.
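The b parameter of the Gutenberg-Richter law is commonly estimated with Aki's maximum-likelihood formula. A minimal sketch on a synthetic catalogue follows; the completeness magnitude and true b-value are illustrative, and this is not necessarily the imaging procedure used in the study:

```python
import numpy as np

def aki_b_value(magnitudes, m_c):
    """Maximum-likelihood b-value (Aki, 1965):
    b = log10(e) / (mean(M) - Mc), for events with M >= Mc."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Synthetic catalogue drawn from a GR law with b = 1.0 above Mc = 1.5:
# magnitudes above Mc follow an exponential law with rate b * ln(10).
rng = np.random.default_rng(3)
b_true = 1.0
mags = 1.5 + rng.exponential(1 / (b_true * np.log(10)), size=5000)

b_est = aki_b_value(mags, m_c=1.5)
print(f"estimated b-value: {b_est:.2f}")
```

Spatial imaging of a and b would repeat such an estimate within moving sub-volumes of the catalogue.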

How to cite: Gola, G., Barone, A., Castaldo, R., Chiodini, G., D'Auria, L., García-Hernández, R., Pepe, S., Solaro, G., and Tizzani, P.: Integrating geophysical, geochemical, petrological and geological data for the thermal and rheological characterization of unconventional geothermal fields: the case study of Long Valley Caldera, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11990, https://doi.org/10.5194/egusphere-egu22-11990, 2022.

EGU22-12331 | Presentations | GI6.3 | Highlight

The evaluation of soil organic carbon through VIS-NIR spectroscopy to support the soil health monitoring 

Haitham Ezzy, Anna Brook, Claudio Ciavatta, Francesca Ventura, Marco Vignudelli, and Antonello Bonfante

Increasing the organic matter content of the soil has been presented in the "4 per 1000" proposal as a significant climate mitigation measure able to support the achievement of Sustainable Development Goal 13 (Climate Action) of the United Nations.

At the same time, the report of the Mission Board for Soil Health and Food, "Caring for soil is caring for life," indicates that one of the targets to be reached by 2030 is the conservation and increase of soil organic carbon stock. De facto, the panel clearly indicates soil organic carbon as an indicator that can be used to monitor soil health and, at the same time, whether the current soil use is sustainable or not.

Thus, it is to be expected that monitoring of soil organic carbon (SOC) will become required to check the sustainability of agricultural practices in agricultural areas. For all the above reasons, the development of reliable and fast indirect methods to evaluate SOC is necessary to support different stakeholders (governments, municipalities, farmers) in monitoring SOC at different spatial scales (national, regional, local).

Over the past two decades, data mining approaches to the spatial modeling of soil organic carbon, using machine learning techniques and artificial neural networks (ANN) to investigate the amount of carbon in the soil from remote sensing data, have been widely considered. Accordingly, this study aims to design an accurate and robust neural network model to estimate soil organic carbon using field-portable spectrometer data and laboratory-based visible and near-infrared (VIS/NIR, 350−2500 nm) spectroscopy of soils. Measurements will be made on two sets of the same soil samples: the first prepared according to the standard laboratory protocol for soil scanning, and the second left without any preparation, to simulate soil conditions in the sampling field and emphasize the predictive capability to achieve a fast, cheap and accurate assessment of soil status. The soil carbon parameter will be determined using a multivariate regression method, Least Absolute Shrinkage and Selection Operator (Lasso) regression, predicting in intervals (high, medium, and low). The results are expected to increase accuracy, precision, and cost-effectiveness over traditional ex-situ methods.
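As a sketch of the Lasso step, the example below fits a sparse regression to synthetic spectra in which a few hypothetical bands carry the SOC signal. Band positions, coefficients and noise level are assumptions for illustration only:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Synthetic VIS/NIR reflectance spectra (350-2500 nm resampled to 200 bands)
# where only a few bands carry the SOC signal, mimicking sparse features.
n_samples, n_bands = 300, 200
X = rng.normal(0, 1, (n_samples, n_bands))
informative = [20, 75, 140]  # hypothetical SOC-sensitive bands
soc = X[:, informative] @ np.array([1.5, -1.0, 0.8]) + rng.normal(0, 0.2, n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, soc, random_state=0)
model = Lasso(alpha=0.05).fit(X_tr, y_tr)

r2 = model.score(X_te, y_te)
n_selected = np.sum(model.coef_ != 0)
print(f"test R^2: {r2:.2f}, bands retained: {n_selected}/{n_bands}")
```

Predicted SOC values can then be discretised into the low/medium/high intervals mentioned above, e.g. with `np.digitize` over chosen thresholds.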

The contribution has been realized within the international EIT Food project MOSOM (Mapping of Soil Organic Matter; https://www.eitfood.eu/projects/mosom)

How to cite: Ezzy, H., Brook, A., Ciavatta, C., Ventura, F., Vignudelli, M., and Bonfante, A.: The evaluation of soil organic carbon through VIS-NIR spectroscopy to support the soil health monitoring, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12331, https://doi.org/10.5194/egusphere-egu22-12331, 2022.

EGU22-12364 | Presentations | GI6.3

Stromboli Volcano observations through the Airborne X-band Interferometric SAR (AXIS) system 

Paolo Berardino, Antonio Natale, Carmen Esposito, Gianfranco Palmese, Riccardo Lanari, and Stefano Perna

Synthetic Aperture Radar (SAR) represents nowadays a well-established tool for day-and-night, all-weather microwave Earth Observation (EO) [1]. In recent decades, a number of EO techniques based on SAR data have been developed for investigating several natural and anthropic phenomena affecting our planet. Among these, SAR Interferometry (InSAR) and Differential SAR Interferometry (DInSAR) undoubtedly represent powerful techniques to characterize the deformation processes associated with several natural phenomena, such as earthquakes, landslides, subsidence and volcanic unrest events [2]-[4].

In particular, such techniques can benefit of the operational flexibility offered by airborne SAR systems, which allow us to frequently monitor fast-evolving phenomena, timely reach the region of interest in case of emergency, and observe the same scene under arbitrary flight tracks.

In this work, we present the results relevant to multiple radar surveys carried out over the Stromboli Island, in Italy, through the Italian Airborne X-band Interferometric SAR (AXIS) system. The latter is based on the Frequency Modulated Continuous Wave (FMCW) technology, and is equipped with a three-antenna single-pass interferometric layout [5].

The considered dataset has been collected during three different acquisition campaigns, carried out from July 2019 to June 2021, and consists of radar data acquired along four flight directions (SW-NE, NW-SE, NE-SW, SE-NW), as to describe flight circuits around the island and to illuminate the Stromboli volcano under different points of view.

References

[1] A. Moreira, P. Prats-Iraola, M. Younis, G. Krieger, I. Hajnsek, K. P. Papathanassiou, “A Tutorial on Synthetic Aperture Radar”, IEEE Geoscience and Remote Sensing Magazine, pp. 6-43, March 2013.

[2] R. Bamler, P. Hartl, “Synthetic Aperture Radar Interferometry”, Inverse Problems, vol. 14, no. 4, R1, 1998.

[3] P. Berardino, G. Fornaro, R. Lanari and E. Sansosti, “A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms”, IEEE Trans. Geosci. Remote Sens., vol. 40, no. 11, pp. 2375-2383, Nov. 2002.

[4] R. Lanari, M. Bonano, F. Casu, C. De Luca, M. Manunta, M. Manzo, G. Onorato, I. Zinno, “Automatic Generation of Sentinel-1 Continental Scale DInSAR Deformation Time Series through an Extended P-SBAS Processing Pipeline in a Cloud Computing Environment”, Remote Sensing, 2020, 12, 2961.

[5] C. Esposito, A. Natale, G. Palmese, P. Berardino, R. Lanari, S. Perna, “On the Capabilities of the Italian Airborne FMCW AXIS InSAR System”, Remote Sens. 2020, 12, 539.

 

How to cite: Berardino, P., Natale, A., Esposito, C., Palmese, G., Lanari, R., and Perna, S.: Stromboli Volcano observations through the Airborne X-band Interferometric SAR (AXIS) system, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12364, https://doi.org/10.5194/egusphere-egu22-12364, 2022.

EGU22-12927 | Presentations | GI6.3 | Highlight

FRA.SI. project - INTEGRATED MULTI-SCALE METHODOLOGIES FOR THE ZONATION OF LANDSLIDE-INDUCED HAZARD IN ITALY 

Pietro Tizzani, Paola Reichenbach, Federica Fiorucci, Massimiliano Alvioli, Massimiliano Moscatelli, and Antonello Bonfante and the Fra.Si. Team

Fra.Si., a national research project supported by the Ministry of the Environment and Land and Sea Protection, develops a coherent set of multiscale methodologies for the assessment and zoning of earthquake-induced landslide hazards. To achieve this goal, the project operates at different geographical, temporal, and organizational scales, and in different geological, geomorphological, and seismic-tectonic contexts. Given the complexity, variability, and extent of earthquake-induced landslides in Italy, operating at multiple scales makes it possible to (a) maximize the use of available data and information; (b) propose methodologies and experiment with models that operate at different scales and in different contexts, exploiting their peculiarities at the most congenial scales and coherently exporting the results across scales; and (c) obtain results at scales of interest for different users.

The project defines a unified and coherent methodological framework for the assessment and zoning of earthquake-induced landslide hazard, integrating existing information and data on earthquake-induced landslides in Italy, available to the proponents, in the technical literature and from "open" sources, in favor of the cost-effectiveness of the proposal. The integration exploits a coherent set of modeling tools, both expert (heuristic) and numerical (statistical and probabilistic, physically-based, FEM, optimization models). The methodology considers the problem at multiple scales, including: (a) three geographic scales - the national synoptic scale, the regional mesoscale and the local scale; (b) two time scales - the pre-event scale, typical of territorial planning and deferred civil protection time, and the post-event scale, characteristic of real civil protection time; and (c) different organizational and management scales - from spatial planning and soil defense, including post-seismic reconstruction, to civil protection rapid response. Furthermore, the methodology considers the characteristics of seismically induced landslides and the associated hazard in the main geological, geomorphological and seismic-tectonic contexts in Italy.

The project develops methodologies and products for different users and uses. The former concern methodologies for (i) the synoptic zoning of the hazard posed by earthquake-induced landslides in Italy; (ii) the zoning and quantification of the hazard from earthquake-induced landslides at the regional scale; (iii) the quantification of the hazard of single deep landslides in the seismic phase; and (iv) the identification and geological-technical modeling of deep co-seismic landslides starting from advanced post-seismic satellite DInSAR analyses.

How to cite: Tizzani, P., Reichenbach, P., Fiorucci, F., Alvioli, M., Moscatelli, M., and Bonfante, A. and the Fra.Si. Team: FRA.SI.project - AN INTEGRATED MULTI-SCALE METHODOLOGIES FOR THE ZONATION OF LANDSLIDE-INDUCED HAZARD IN ITALY, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12927, https://doi.org/10.5194/egusphere-egu22-12927, 2022.

EGU22-629 | Presentations | GI6.1

Utilizing Hyperspectral Remote Sensing to Detect Concentration of Cyanobacteria in Freshwater Ecosystems 

Jalissa Pirro, Christopher Thomas, Cameron Wallace, Zoe Alpert, Madison Tuohy, Timothy de Smet, Kiyoko Yokota, Patrick Jackson, Lisa Cleckner, Courtney Wigdahl-Perry, Kelly Young, Kely Amejecor, and Austin Scheer

Harmful algal blooms (HABs) are a threat to freshwater quality, public health, and aquatic ecosystems. The economic losses suffered by the agricultural, fishing, and tourism industries as a result of HABs exceed billions of dollars worldwide annually, with cleanup costs borne by local and national governments reaching a similar level. Current manual field-based sampling methods followed by laboratory analysis to detect and monitor HABs are expensive, labor-intensive, and slow, delaying critical management decisions. Moreover, current detection methods have had limited success documenting HABs in freshwater bodies; such attempts typically employ satellite-based multispectral remote sensing, which is limited by cost, low spatial and spectral resolution, and restrictive temporal windows for on-demand revisits. Our study used relatively low-cost unpiloted aerial systems (UAS) and hyperspectral sensors to detect HABs at higher resolution, with the capacity for near real-time detection. Additionally, our hyperspectral remote sensing can detect and differentiate between cyanobacterial HABs and other chlorophyll-producing organisms. We detected a spectral peak at 710 nm that is characteristic of cyanobacterial HABs. Principal components analysis (PCA) was useful to spatially highlight HABs over wide areas. By utilizing hyperspectral remote sensing with UAS, HABs can be monitored and detected more efficiently. This state-of-the-art research methodology will allow for targeted assessment, monitoring, and design of HAB management plans that can be adapted for other impacted inland freshwater bodies. 
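The PCA step can be sketched on a synthetic hyperspectral cube containing a patch with a reflectance peak near 710 nm. All sizes, reflectance values and the patch location are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(11)

# Synthetic hyperspectral cube (rows, cols, bands) spanning 400-900 nm.
rows, cols, n_bands = 60, 60, 100
wavelengths = np.linspace(400, 900, n_bands)
cube = rng.normal(0.2, 0.02, (rows, cols, n_bands))  # water background

# Add a cyanobacteria-like patch with a reflectance peak near 710 nm.
peak = 0.15 * np.exp(-((wavelengths - 710) ** 2) / (2 * 15.0**2))
cube[20:40, 20:40, :] += peak

# PCA on the unfolded (pixels x bands) matrix highlights the bloom spatially:
# the first component captures the bloom-vs-background contrast.
flat = cube.reshape(-1, n_bands)
scores = PCA(n_components=3).fit_transform(flat)
pc1 = scores[:, 0].reshape(rows, cols)

inside = np.abs(pc1[20:40, 20:40]).mean()
outside = np.abs(pc1[:20, :]).mean()
print(f"mean |PC1| inside bloom: {inside:.3f}, outside: {outside:.3f}")
```

Mapping `pc1` over the scene is the kind of wide-area spatial highlighting of blooms described above; a sign check (PCA components have arbitrary sign) or the 710 nm band ratio would then confirm the cyanobacterial interpretation.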

How to cite: Pirro, J., Thomas, C., Wallace, C., Alpert, Z., Tuohy, M., de Smet, T., Yokota, K., Jackson, P., Cleckner, L., Wigdahl-Perry, C., Young, K., Amejecor, K., and Scheer, A.: Utilizing Hyperspectral Remote Sensing to Detect Concentration of Cyanobacteria in Freshwater Ecosystems, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-629, https://doi.org/10.5194/egusphere-egu22-629, 2022.

EGU22-1107 | Presentations | GI6.1

New index for assessment of environment in post-mining area – Mining and Geology Impact Factor (MaGIF)

A. Buczyńska and J. Blachowski

The lignite mine called 'Friendship of Nations – Babina Shaft', located on the border between Poland and Germany, was closed almost 50 years ago. Despite the cessation of mining works (carried out by opencast and underground methods) and the completed reclamation process, the negative effects of the former mineral exploitation are still observed in this region (e.g. sinkholes, local flooding, subsidence). It should be emphasized that the area of the now-closed mine is also characterized by a complicated glaciotectonic structure, the result of successive glacial periods in the past. Both factors, i.e. the past mining activity and the geological conditions, may affect the condition of the soils and vegetation of the analysed area. The aim of this study was to determine whether, and to what extent, the former lignite mining and the complicated glaciotectonic structure had an impact on the changes in the state of plant cover and soils noted in the period 1989–2019. A new index, the Mining and Geology Impact Factor (MaGIF), was developed to describe the strength and nature of the relationship between the aforementioned factors within four test fields, based on the coefficient values and variables of six Ordinary Least Squares (OLS) models. In the research, 12 independent variables representing the geological and mining conditions of the area were prepared. The dependent variables, statistics of selected spectral indices obtained for 1989–2019, were determined in the GIS environment within individual pixels of the research area. Two vegetation indices (NDVI and NDII) and four soil indices (DSI, SMI, Ferrous Minerals and SI3), calculated on the basis of Landsat TM/ETM+/OLI images, were used. The values of the obtained MaGIF index were in the range of −9.99 to 0.62, and their distribution in the test fields proved that the former mining and geological conditions had the strongest impact on the vegetation and soils of the central part of field no. 1, as well as on the north-western and south-eastern parts of field no. 4. The influence of the explanatory factors on the indicated components of the environment was negative (an increase or decrease in the value of an independent variable correlated with a decrease or increase, respectively, in the value of a given spectral index). In the western and southern parts of field no. 1, the eastern part of field no. 3, the central and eastern parts of field no. 4, as well as in a major part of field no. 2, the influence of the explanatory factors was smallest. Only in fields no. 2 and 4 were small zones of positive impact of the independent variables observed. The results indicate that former mining and geological conditions have a significant influence on the condition of the vegetation and soils of post-mining areas. It is therefore important to monitor the changes taking place in these regions in order to undertake appropriate preventive works and eliminate the resulting damage.
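The OLS models underlying MaGIF relate a spectral-index statistic to geological and mining predictor variables. A minimal sketch of such a fit via the normal least-squares solution is shown below; the function name, data layout, and intercept handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares for y ~ X.
    X is (n_samples, n_predictors); returns [intercept, coef_1, ...]."""
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```

The signs and magnitudes of the fitted coefficients are what an index like MaGIF would summarize: a negative coefficient indicates that increasing a mining/geology variable depresses the spectral index in that pixel neighbourhood.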

How to cite: Buczyńska, A. and Blachowski, J.: New index for assessment of environment in post-mining area – Mining and Geology Impact Factor (MaGIF), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1107, https://doi.org/10.5194/egusphere-egu22-1107, 2022.

EGU22-1185 | Presentations | GI6.1

Application of UAS laser scanning for precision crop monitoring in Hungary 

László Bertalan, Péter Riczu, Róbert Bors, Szilárd Szabó, and Anette Eltner

Airborne Laser Scanning (ALS) is a widely used method in Earth science, agriculture, and forestry. It can provide high-resolution, accurate spatial data for a better understanding of surface structures; moreover, based on the laser pulses, it can even reveal important features of the ground below dense vegetation. However, ALS surveys require specially designed aircraft, pilots and operators, and detailed flight planning, which makes data acquisition expensive. The application of laser scanners on Unmanned Aerial Systems (UAS) has emerged in the last few years. These sensor payloads are smaller and lighter, and offer lower accuracy than traditional ALS surveys, but in many cases they still provide more reliable mapping than photogrammetric methods. Several new UAS laser scanners are being developed, but their accuracy and applicability for agricultural monitoring still need to be studied in many respects.

In our study we applied the novel Zenmuse L1 LiDAR sensor mounted on a DJI Matrice M300 RTK UAS. We surveyed a ~50 ha corn field near Berettyóújfalu, Hungary, in the summer of 2021. Our aim was to assess the applicability of UAS laser scanning for precise ground surface reconstruction. During this period the corn was irrigated; therefore, extensive weed patches were observed between the paths. The ground-filtered laser scanner data were compared to a photogrammetry-based aerial survey that we carried out at the beginning of the vegetation cycle on the same parcel. Our results showed both the potential and the limitations of this sensor for precision agriculture. The laser beams produced a significant amount of noise between the paths, which had to be cleaned to extract the ground surface below the corn canopy. Based on our data processing methods, we were able to delineate drainage networks within the parcel similar to those derived from the initial aerial survey. However, the UAS LiDAR achieved the most accurate surface reconstruction over the clearer grassland patches around the parcel.

L. Bertalan was supported by the INKP2022-13 grant of the Hungarian Academy of Sciences. This research was funded by the Thematic Excellence Programme (TKP2020-NKA-04) of the Ministry for Innovation and Technology in Hungary. This research was also influenced by the COST Action CA16219 “HARMONIOUS - Harmonization of UAS techniques for agricultural and natural ecosystems monitoring”.

How to cite: Bertalan, L., Riczu, P., Bors, R., Szabó, S., and Eltner, A.: Application of UAS laser scanning for precision crop monitoring in Hungary, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1185, https://doi.org/10.5194/egusphere-egu22-1185, 2022.

EGU22-2545 | Presentations | GI6.1

Trends in vegetation changes over wetland areas in Denmark using remote sensing data 

Joan Sebastian Gutierrez Diaz, Mogens Humlekrog Greve, and Lis Wollesen de Jonge

Land cover dynamics play a vital role in many scientific fields, such as natural resources management, environmental research, climate modeling, and soil biogeochemistry; thus, understanding the spatio-temporal land cover status is important to design and implement conservation measures. Remote sensing products provide relevant information on spatial and temporal changes on the Earth's surface, and recently, time series analyses based on satellite images and spectral indices have become a new tool for accurately monitoring spatial trends and land cover changes over large areas. This work aims to determine the trends of the vegetation spectral response, expressed as the Normalized Difference Vegetation Index (NDVI), over the period 2005–2018 and to compare these trends with the land-use and cover changes between 2005 and 2018 in wetland areas across Denmark. Change detection methods based on bi-temporal information from two years may lead to the detection of pseudo-changes, which hinders land-use and cover monitoring at different scales. We studied the potential of including NDVI temporal curves derived from a yearly time series of Landsat TM images (30-m spatial resolution) to obtain more accurate change detection results. We computed the NDVI temporal trends using pixel-wise Theil-Sen and Mann-Kendall tests, and then explored the relationship between NDVI trends and the different land-use and cover change classes. We found a significant relationship between NDVI trends and changes in land use and cover. Changes from cropland to wetland and from cropland to forest coincided with statistically significant (p≤0.05) negative and positive NDVI trends, respectively. Changes from grasslands to permanent wetlands corresponded with statistically significant negative NDVI trends. The differences in vegetation productivity trends could be indicative of the combined effect of human activity and climate. We show that this combined analysis provides a more complete picture of the land-use and cover changes in wetland areas of Denmark. The analysis could be improved if the NDVI time series were seasonally aggregated.
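The pixel-wise trend tests mentioned above can be sketched for a single pixel's yearly NDVI series as follows. This is a minimal numpy implementation of the Theil-Sen slope and the Mann-Kendall statistic without tie correction, assuming n > 10 for the normal approximation; the authors' exact implementation may differ.

```python
import numpy as np

def theil_sen_slope(t, y):
    """Theil-Sen estimator: the median of all pairwise slopes."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    i, j = np.triu_indices(len(t), k=1)
    return np.median((y[j] - y[i]) / (t[j] - t[i]))

def mann_kendall(y):
    """Mann-Kendall trend test: return the S statistic and z-score
    (no tie correction; normal approximation assumes n > 10)."""
    y = np.asarray(y, float)
    n = len(y)
    i, j = np.triu_indices(n, k=1)
    s = np.sum(np.sign(y[j] - y[i]))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z
```

A |z| above 1.96 corresponds to a trend significant at p ≤ 0.05 under the two-sided normal approximation.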

How to cite: Gutierrez Diaz, J. S., Humlekrog Greve, M., and Wollesen de Jonge, L.: Trends in vegetation changes over wetland areas in Denmark using remote sensing data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2545, https://doi.org/10.5194/egusphere-egu22-2545, 2022.

EGU22-2711 | Presentations | GI6.1

Open data sets on spectral properties of boreal forest components 

Miina Rautiainen, Aarne Hovi, Petri Forsström, Jussi Juola, Nea Kuusinen, and Daniel Schraik

Spectral libraries of different components forming forests – such as leaves, bark and forest floor – are needed in the development of remote sensing methods and land surface models, and for understanding the shortwave radiation regime and ecophysiological processes of forest canopies. This poster summarizes spectral libraries of boreal forest vegetation and lichens collected in several projects led by Aalto University. The spectral libraries comprise reflectance and transmittance spectra of leaves (or needles) of 25 tree species, reflectance spectra of tree bark, and reflectance spectra of different types of forest floor vegetation and lichens. The spectral libraries have been published as open data and are now readily available for the community to use. 

How to cite: Rautiainen, M., Hovi, A., Forsström, P., Juola, J., Kuusinen, N., and Schraik, D.: Open data sets on spectral properties of boreal forest components, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2711, https://doi.org/10.5194/egusphere-egu22-2711, 2022.

EGU22-3130 | Presentations | GI6.1

A knowledge-based, validated classifier for the identification of aliphatic and aromatic plastics by WorldView-3 satellite data

S. Zhou

Although the C–H chains of petroleum derivatives display unique absorption features in the short-wave infrared (SWIR), it is a challenge to identify plastics on terrestrial surfaces. Reflectance spectra vary widely across chemically different polymer types and their different degrees of brightness and transparency, and are further influenced by the respective surface backgrounds. This paper investigates the capability of WorldView-3 (WV-3) satellite data, characterized by high spatial resolution and equipped with eight distinct and relatively narrow SWIR bands, for global monitoring of different types of plastic materials. To meet this objective, hyperspectral measurements and simulations were conducted in the laboratory and in aircraft campaigns, based on the JPL-ECOSTRESS, USGS, and in-house hyperspectral libraries, all of which were convolved to the spectral response functions of the WV-3 system. The analyses were further supported by experiments in which different plastic materials were placed on different backgrounds, and scaled percentages of plastics per pixel were modeled to determine the minimum detectable fractions. To determine the detectability of plastics with various chemical and physical properties and different fractions against diverse backgrounds, a knowledge-based classifier was developed whose routines are based on diagnostic spectral features in the SWIR range. The classifier shows outstanding results on various background scenarios for laboratory experimental imagery as well as for airborne data, and it is further able to mask non-plastic materials. Three clusters of plastic materials can clearly be identified based on spectra and imagery. The first cluster identifies aliphatic compounds, comprising polyethylene (PE), polyvinyl chloride (PVC), ethylene vinyl acetate copolymer (EVAC), polypropylene (PP), polyoxymethylene (POM), polymethyl methacrylate (PMMA), and polyamide (PA). The second and third clusters are diagnostic for aromatic hydrocarbons, including polyethylene terephthalate (PET), polystyrene (PS), polycarbonate (PC), and styrene-acrylonitrile (SAN), respectively separated from polybutylene adipate terephthalate (PBAT), acrylonitrile butadiene styrene (ABS), and polyurethane (PU). The robustness of the classifier is examined on the basis of simulated spectra derived from our in-house HySimCaR model. The model simulates radiation transfer using virtual 3D scenarios and ray tracing, and hence enables analysis of the influence of various factors, such as material brightness, transparency, and fractional coverage, as well as different background materials. We validated our results with laboratory and simulated datasets and with tests using airborne data recorded at four distinct sites with different surface characteristics. The results of the classifier were further compared to results produced by another signature-based method, the spectral angle mapper (SAM), and a commonly used technique, maximum likelihood estimation (MLE). Finally, we applied and successfully tested the classifier on WV-3 imagery of sites known for a high abundance of plastics in Almería (Spain), Cairo (Egypt), and Accra (Ghana). Both airborne and WV-3 data were atmospherically corrected and transferred to at-surface reflectances. The results prove the combination of WV-3 data and the newly designed classifier to be an efficient and reliable approach to globally monitor and identify three clusters of plastic materials at various fractions on different backgrounds.

How to cite: Zhou, S.: A knowledge-based, validated classifier for the identification of aliphatic and aromatic plastics by WorldView-3 satellite data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3130, https://doi.org/10.5194/egusphere-egu22-3130, 2022.

EGU22-3532 | Presentations | GI6.1

The use of satellite data to support the volcanic monitoring during the last Vulcano island crisis 

Malvina Silvestri, Federico Rabuffi, Vito Romaniello, Massimo Musacchio, and Maria Fabrizia Buongiorno

The “La Fossa” summit crater of Vulcano island (Sicily, Italy) showed increasing volcanic activity, characterized by strong gas emissions and high soil temperatures, during July 2021 (https://cme.ingv.it/stato-di-attivita-dei-vulcani-eoliani/crisi-idrotermale-vulcano-2021). The National Civil Protection Department declared the “yellow alert” level, and the Mayor of the island issued an order prohibiting citizens from staying in the areas surrounding the harbor due to the large amounts of gases emitted; alternative accommodation was sought for about 250 persons. In this work, we report and analyze the surface temperature estimated from satellite data (ASTER and Landsat-8) from 2000 to 2022. These analyses extend the study described in Silvestri et al. (2018), which reports a time series of thermal anomalies from 2000 to 2018 with a focus on two specific sites of Vulcano island: “La Fossa” and “Fangaia”. We updated the dataset up to 2022 and analyzed space-borne remotely sensed surface temperature data for the whole island. We applied the Pixel Purity Index technique to ASTER and Landsat-8 satellite data (GSD = 90 m) in order to detect the pixels that are most relevant from the thermal point of view, and used these pixels as significant points for the time series analysis. Moreover, strong carbon dioxide emissions could be detected from data acquired by the new Italian space mission PRISMA (GSD = 30 m), which carries a hyperspectral sensor operating in the range 0.4–2.5 µm; this possibility will be explored by analyzing data on active fumaroles on the island. The goal of the analysis is also to verify whether variations in volcanic activity (in terms of thermal anomalies and gas emissions) on Vulcano island can be detected by satellite data.
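The Pixel Purity Index technique mentioned above repeatedly projects pixel spectra onto random unit vectors and counts how often each pixel falls at an extreme of the projection; pixels with high counts are spectrally "pure" candidates. The sketch below is a minimal illustrative implementation, not the authors' code; the (n_pixels, n_bands) layout and parameter names are assumptions.

```python
import numpy as np

def pixel_purity_index(X, n_projections=1000, seed=0):
    """Pixel Purity Index: count how often each pixel spectrum is an
    extreme (min or max) under random 1D projections.
    X is (n_pixels, n_bands); returns an integer count per pixel."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    counts = np.zeros(len(X), dtype=int)
    for _ in range(n_projections):
        v = rng.standard_normal(X.shape[1])
        v /= np.linalg.norm(v)          # random unit vector
        p = X @ v                       # project all spectra onto it
        counts[np.argmin(p)] += 1       # extremes get a vote
        counts[np.argmax(p)] += 1
    return counts
```

Pixels whose counts exceed a chosen threshold would then be retained as the thermally (or spectrally) most relevant points for the time-series analysis.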

How to cite: Silvestri, M., Rabuffi, F., Romaniello, V., Musacchio, M., and Buongiorno, M. F.: The use of satellite data to support the volcanic monitoring during the last Vulcano island crisis, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3532, https://doi.org/10.5194/egusphere-egu22-3532, 2022.

EGU22-3583 | Presentations | GI6.1

Monitoring gas dynamics in underwater volcanic environments using iXblue SeapiX multi split beam echosounder: an example from the Laacher See (Eifel, Germany)

G. Jouve, C. Caudron, G. Matte, and F. Mosca

Forecasting volcanic and limnic eruptions to improve early warning systems is crucial to prevent severe impacts on human lives. One of the main triggers of explosive eruptions is volcanic gas which, contrary to the atmosphere, is easily detected in the water column, particularly using hydroacoustic methods [1]. Two pioneering studies monitored gas venting into Kelud Crater Lake (Indonesia): from a hydroacoustic station shortly before a Plinian eruption in 1990 [1] and, nearly two decades later, by empirically quantifying CO2 fluxes through acoustic measurements in the same lake just before a non-explosive eruption [2]. However, despite these hydroacoustic detection capabilities, fundamental advances are limited by technological performance. Overall acoustic detection of a bubble field is easy, while its quantification remains complex due to the 3D structure of the clouds, heterogeneous bubble sizes, and acoustic interactions between bubbles. It is thus necessary to accurately map the different bubble clouds and to monitor their evolution through time in order to reduce the volcanic risk, which is major in aqueous environments. Here, we present preliminary results on water column gas distribution and quantification from an Eifel crater lake (Germany), using the iXblue SeapiX 3D multi split-beam echosounder. The SeapiX acoustic array has a special geometry: a dual/steerable multibeam echosounder in a Mills Cross configuration. It provides 120° × 120° coverage (quasi real-time) with 1.6° resolution, achieved with 128 single elements. All beams in all steering directions process split-beam TS measurements to provide accurate volumetric TS for every single target in the volume. Backscatter profiles of elements in the water column allowed fish and gas bubbles to be distinguished, which demonstrates the potential for developing an automatic gas detection module using the SeapiX software. Ongoing research on the Target Strength (TS) of the bubbles suggests they are of very small size (35 μm), much smaller than observed elsewhere using single-beam echosounders, which might also explain why, in the same spot, we did not observe gas bubbles using a camera mounted on an ROV. Using the steerable capability of the system, a recent mission performed 4D monitoring of a single gas plume from a static position on an anchored USV, raising new perspectives to anticipate the tipping point of a critical enhancement of gas release and to mitigate the volcanic risk.

[1] Vandemeulebrouck et al (2000) J. Volcanol. Geotherm. Res 97, 1-4: 443-456

[2] Caudron et al (2012) JGR: Solid Earth 117, B5


How to cite: Jouve, G., Caudron, C., Matte, G., and Mosca, F.: Monitoring gas dynamics in underwater volcanic environments using iXblue SeapiX multi split beam echosounder: an example from the Laacher See (Eifel, Germany), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3583, https://doi.org/10.5194/egusphere-egu22-3583, 2022.

EGU22-4460 | Presentations | GI6.1

Remotely sensed dune movement rates in desert margins of Central Asia over five decades using satellite imagery 

Lukas Dörwald, Janek Walk, Frank Lehmkuhl, and Georg Stauch

Remote sensing is widely used to detect, map, and monitor environmental changes and remains a rapidly developing field. The detection of dune movement rates has been carried out in the field since the 20th century and through remote sensing since the technical requirements were met in the 1970s (Hugenholtz et al. 2012). A wide variety of imagery from the last four decades is freely available in the archives of the Sentinel-2 and Landsat 5 to 8 satellites, with spatial resolutions ranging between 10 and 30 meters. Complementing these data sources, in this study we additionally used CORONA KH-4B images from the 1960s and 1970s. These satellites were originally used to record military intelligence images before being declassified for scientific use in 1995. Despite its age, the KH-4B satellite delivered a considerably high spatial resolution of up to 1.8 m, thus bridging a considerable time gap in high-resolution imagery and enabling the detection and mapping of individual dunes and dune fields. After georeferencing, these images were used to detect and quantify the rates and directions of sand dune movement, as well as to estimate dune height through a simple trigonometric approach.

We focus on single dunes and their movement rates in the high-altitude intramontane Gonghe Basin in Central Asia. The location of the study area at the north-eastern edge of the Asian summer monsoon and the mid-latitude Westerlies makes it especially sensitive to climatic variability (Vimpere et al. 2021). The dominant south-easterly dune migration directions are in good agreement with the prevailing wind patterns. Dune heights of ~8–28 meters and ~3–31 meters were calculated for the late 1960s and the 2020s, respectively. Movement rates of under one meter up to ~24 meters per year were assessed for the time range from the late 1960s to the 2020s.

References:

Hugenholtz, C.H., Levin, N., Barchyn, T.E., and Baddock, M.C. (2012): Remote sensing and spatial analysis of aeolian sand dunes: A review and outlook. Earth-Science Reviews 111, 319–334, https://doi.org/10.1016/j.earscirev.2011.11.006

Vimpere, L., Watkins, S.E., and Castelltort, S. (2021): Continental interior parabolic dunes as a potential proxy for past climates. Global and Planetary Change 206, 103622, https://doi.org/10.1016/j.gloplacha.2021.103622
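A common form of the simple trigonometric dune-height estimate uses the length of the shadow cast by the dune and the solar elevation angle at image acquisition; whether this exactly matches the authors' approach is an assumption, and the function below is purely illustrative.

```python
import math

def dune_height(shadow_length_m, sun_elevation_deg):
    """Estimate dune height from the cast-shadow length and the
    solar elevation angle at image acquisition: h = L * tan(elev)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))
```

For example, a 20 m shadow under a 45° sun elevation implies a height of roughly 20 m; lower sun angles lengthen the shadow and so improve the sensitivity of the estimate.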

How to cite: Dörwald, L., Walk, J., Lehmkuhl, F., and Stauch, G.: Remotely sensed dune movement rates in desert margins of Central Asia over five decades using satellite imagery, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4460, https://doi.org/10.5194/egusphere-egu22-4460, 2022.

EGU22-6153 | Presentations | GI6.1 | Highlight

An integrated approach for environmental multi-source remote sensing 

Maria Marsella, Angela Celauro, and Ilaria Moriero


Remote sensing measurements have benefited from great technological improvements, which have allowed a higher degree of automation while increasing the spatial and temporal resolution of the collected data. Multi-scale and multi-frequency optical and radar satellite sensors, often adopted in an integrated manner, are starting to provide efficient solutions for controlling and monitoring rapidly evolving urban and natural areas. On the other hand, close-range remote sensing techniques, such as those operated from UAV platforms, and innovative ground-based instruments offer, respectively, the chance to downscale the observations by performing site-specific analyses at enhanced resolution, and to collect in-situ datasets for calibration and data quality control. By improving the quantity and quality of the collected data, a better understanding of the ongoing processes becomes possible, as well as the setting up of numerical models to forecast future scenarios.


Therefore, the implementation of integrated techniques for environmental monitoring turns out to be a strategic solution for analyzing hazardous areas at different spatial and temporal resolutions. Research devoted to the optimization of data processing tools by means of AI algorithms has evolved with the aim of improving the level of information and its reliability. In this context, a great impact is linked to the availability of open data and open-source processing tools distributed under the Copernicus Programme.


A review of the available technologies for environmental monitoring is provided, including examples of experimental cases in areas affected by natural hazards (volcanic eruptions, landslides, coastal erosion, flooding, etc.) and by human activities that can produce incidental damage to the environment (urbanization, agriculture, infrastructure, landfills, dumpsites, pollution, etc.). In addition, the same approach is useful for monitoring the degradation of cultural heritage sites. Finally, the capability of collecting data at a global level contributed to the analysis of the environmental and economic impacts consequent to the COVID-19 pandemic.


How to cite: Marsella, M., Celauro, A., and Moriero, I.: An integrated approach for environmental multi-source remote sensing, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6153, https://doi.org/10.5194/egusphere-egu22-6153, 2022.

EGU22-6983 | Presentations | GI6.1

Geochemical investigations of 100 superficial soils observed by Sentinel 2 and PRISMA 

Gian Marco Salani, Michele Lissoni, Stefano Natali, and Gianluca Bianchini

Geochemical investigations of agricultural soils are fundamental to characterize the pedosphere dynamics that sustain ecosystem services linked with agriculture. Parameters like soil moisture, soil organic matter (SOM), and soil organic carbon (SOC) are key indicators for evaluating carbon sink potential.

Satellite Earth Observation is a significant source of free data that can be linked to soil characteristics and dynamics and employed to produce temporal series. Access to these data is nowadays facilitated by platforms such as ADAM (https://adamplatform.eu), which allow users to quickly search for, visualize and subset data products, greatly reducing the volume of data that end users must handle.

In this work we demonstrate the usefulness of such systems by carrying out a geochemical investigation of 100 superficial (0–15 cm) soil samples collected in the province of Ferrara (North-Eastern Italy) and using the ADAM platform to associate a time series of Sentinel 2 data with each sample. The samples were collected in October 2021 in fields that were ploughed or mono-cropped with maize, soybean, rice, or winter vegetables. To obtain average soil properties over a spatial scale larger than the satellite sensor resolution, we adopted a composite sampling strategy, merging 5 sub-samples collected at the vertexes and at the center of a 30x30 m2 area. Soil granulometry ranged from clay to medium sand, with the exception of peat deposits. Soil moisture and SOM contents were estimated by loss on ignition (LOI) at 105°C (values from 0.3 to 7.4 wt%) and 550°C (values from 2.1 to 21.0 wt%), respectively. SOC contents (values from 0.7 to 9.3 wt%) were determined through DIN 19539 analysis performed with an Elementar soliTOC Cube. Using the ADAM platform, we associated a temporal series of the Sentinel 2 NDVI data product from 2016 to 2021 with each sampling location, using a cloud coverage mask to eliminate values taken on cloudy days. Localized phenological cycles for each year are recognizable in the remotely sensed data. Hence, our database describes, for each parcel, geochemical parameters and vegetative temporal series.
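The loss-on-ignition estimates can be sketched as below. The reference masses used here follow a common convention (moisture loss at 105 °C relative to the fresh mass, SOM loss at 550 °C relative to the 105 °C dry mass) and may differ in detail from the authors' laboratory protocol; the function name is illustrative.

```python
def loi_moisture_som(m_wet_g, m_105_g, m_550_g):
    """Two-step loss on ignition:
    moisture wt% = mass loss at 105 C relative to the fresh mass,
    SOM wt%      = mass loss at 550 C relative to the 105 C dry mass."""
    moisture = 100.0 * (m_wet_g - m_105_g) / m_wet_g
    som = 100.0 * (m_105_g - m_550_g) / m_105_g
    return moisture, som
```

For a 100 g fresh sample weighing 95 g after 105 °C and 85.5 g after 550 °C, this gives 5 wt% moisture and 10 wt% SOM.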

In a separate study, we also attempted to train a neural network to predict geochemical properties from the soil spectrum measured by the hyperspectral satellite PRISMA. We used the geochemical properties of our 100 samples as training data, associated with the PRISMA spectra of the sampling locations measured on 7 April 2020, when, according to our NDVI data, none of them was covered by vegetation.

How to cite: Salani, G. M., Lissoni, M., Natali, S., and Bianchini, G.: Geochemical investigations of 100 superficial soils observed by Sentinel 2 and PRISMA, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6983, https://doi.org/10.5194/egusphere-egu22-6983, 2022.

EGU22-6995 | Presentations | GI6.1

AI-based hydromorphological assessment of river restoration using UAV-remote sensing 

Felix Dacheneder, Karen Schulz, and Andre Niemann

Many hydromorphological restoration measures have been applied to German water courses since the European Water Framework Directive came into force in 2000. The measures aim to improve habitat diversity. Often, however, no positive effect on aquatic biota can be detected; the implementation and the hydromorphological development of such measures can therefore be questioned. The common monitoring and assessment methods for physical river habitat mapping can likewise be questioned, as they are limited in spatial scale and by the subjectivity of the mapper.

In the last decade, Unmanned Aerial Vehicles (UAVs) in combination with high-resolution sensors have opened new opportunities across spatial and temporal scales. This research presents a case study on the river Lippe for the detection of hydromorphological habitat structures using Structure from Motion (SfM) and deep-learning-based classification methods. In detail, this work discusses the difficulties of creating digital surface models and orthomosaics from field survey data, and also shows results from a case study using a deep learning classification approach to identify physical river habitat structures.

How to cite: Dacheneder, F., Schulz, K., and Niemann, A.: AI-based hydromorphological assessment of river restoration using UAV-remote sensing, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6995, https://doi.org/10.5194/egusphere-egu22-6995, 2022.

EGU22-8296 | Presentations | GI6.1

Satellite imagery band ratio for mapping the open pit mines: A preliminary study 

Anita Punia, Rishikesh Bharti, and Pawan Kumar Joshi

Indices are designed to differentiate land use and land cover classes and to avoid misinterpretation of landscape features. The resemblance of the spectral reflectance of mines to that of urban built-up areas and barren land makes the identification of objects difficult. The open pit Zn–Pb mines of Rampura-Agucha were selected for this study. Freely available Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data from the years 2001 and 2003 were used. It is observed that the (b1 − b5)/(b1 + b5) ratio of ASTER imagery significantly differentiates the Zn–Pb mine from urban settlement and other features. The spectral ranges (µm) of b1 and b5 are 0.52–0.60 (Visible and Near-Infrared) and 2.145–2.185 (Shortwave Infrared), respectively. The pixel values indicate higher reflectance of the open pit, suggesting the feasibility of the ratio for differentiating it from barren and built-up areas. The mine is rich in sphalerite, followed by galena, pyrite and pyrrhotite in different proportions. Since spectral reflectance depends on the mineral types present, further studies are needed to develop indices tailored to specific minerals and mines. In mining regions, the roles of temperature, moisture content, vegetation cover, and high concentrations of pollutants in the variation of spectral reflectance are highly important. The developed index would be beneficial for tracing the extent of overburden dumps, tailings, and mines at a faster rate.
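The proposed index is a normalized difference of ASTER bands 1 and 5 and can be computed per pixel as sketched below; the numpy-based implementation and array names are illustrative placeholders, not the authors' code.

```python
import numpy as np

def normalized_ratio(b1, b5):
    """(b1 - b5) / (b1 + b5) band ratio, with zero-sum pixels set to NaN."""
    b1 = np.asarray(b1, float)
    b5 = np.asarray(b5, float)
    denom = b1 + b5
    out = np.full(b1.shape, np.nan)          # mask undefined pixels
    np.divide(b1 - b5, denom, out=out, where=denom != 0)
    return out
```

Applied to co-registered b1 and b5 arrays, the resulting image can then be thresholded to separate the high-ratio open-pit pixels from built-up and barren classes.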

How to cite: Punia, A., Bharti, R., and Joshi, P. K.: Satellite imagery band ratio for mapping the open pit mines: A preliminary study, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8296, https://doi.org/10.5194/egusphere-egu22-8296, 2022.

EGU22-8417 | Presentations | GI6.1

Impact of different corner reflectors installation on InSAR time-series 

Roland Horváth, Bálint Magyar, and Sándor Tóth

Identifying relatively stable ground control points is always difficult in satellite-based microwave remote sensing. In our case, we analyzed the amplitude and phase of the backscattered signal of artificial objects in the resolution cell. In 2020, we temporarily installed a compact active transponder (CAT) on the top of the Satellite Geodetic Observatory (SGO). During this probation period we tested the operation of this electronic corner reflector (ECR).

In November 2021 we deployed, adjusted and precisely aligned the CAT and also mounted a 90 cm inner-leg passive double-backflip triangular corner reflector pair (part of the Integrated Geodetic Reference Station) to serve as persistent scatterers. We then observed the behaviour of the complex microwave signal using the interferometric synthetic aperture radar (InSAR) technique, utilizing high-resolution Sentinel-1 SAR images. We concentrated on demonstrating the effects of the corner reflector (CR) installation: estimating the signal-to-clutter ratio (SCR), calculating the radar cross-section (RCS), and defining the phase centre at sub-pixel level over a well-specified stack of time series.

We aim to integrate the CRs as benchmarks into the processing algorithm system we are developing, to achieve more accurate surface displacement results using ground control points. In addition, the project contributes to and ensures the extension of our passive corner reflector reference network (SENGA). In this paper, we present an interpretation of the recent outcomes.

How to cite: Horváth, R., Magyar, B., and Tóth, S.: Impact of different corner reflectors installation on InSAR time-series, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8417, https://doi.org/10.5194/egusphere-egu22-8417, 2022.

EGU22-8825 | Presentations | GI6.1

The use of low-cost sensors for monitoring coastal climate hazards and developing early warning support against extreme events. 

Tasneem Ahmed, Leo Creedon, Iulia Anton, and Salem Gharbia

Coastal areas are socially, economically, and environmentally intensive zones. Their risk from natural coastal hazards such as flooding, erosion, and storm surges has increased due to climate-induced changes in their forcing agents or hazard drivers (e.g. sea-level rise). Increased exposure (e.g. dense populations living near the coast) and vulnerability (e.g. insufficient adaptation) to these hazards have complicated the adaptation challenge.

Thus, monitoring coastal hazards is essential to inform suitable adaptation to increase the climate resilience of the coastal areas. In monitoring coastal climate hazards to develop coastal climate resilience, both the forcing agents and the coastal responses should be observed.

As coastal monitoring is often expensive and challenging, creating a database through a systematic analysis of low-cost sensing technologies, such as UAV photogrammetry, for monitoring the hazards and their drivers would benefit stakeholders. Real-time information from these low-cost sensors, complementing the existing institutional sensors, will facilitate better adaptation policies, including the development of early warning support for building coastal resilience. It would also provide a valuable dataset for validating coastal numerical models and offer insights into the relationship between these hazards and their forcing agents. In addition, such low-cost sensors create opportunities for engaging citizens in data collection, making it more efficient and increasing scientific literacy among the general public. For instance, in the Sensing Storm Surge Project (SSSP), citizen science was used to collect technical data characterising estuarine storm surges, generating data usable in peer-reviewed oceanography journals. Coastal areas show complex morphological changes in response to the forcing agents over a wide range of temporal and spatial scales; monitoring the hazards with sufficient temporal and spatial resolution is therefore imperative to distinguish climate-change-driven changes in these hazards and drivers from natural variability. This will not only help address the response strategies to these hazards but also allow them to be adjusted to the changing vulnerability of a particular region.

The database of low-cost sensors thus created is by no means exhaustive, since the sensors were retrieved through a particular combination of keywords in databases such as ScienceDirect, Web of Science, and Scopus; it is nonetheless useful, as these are the latest low-cost sensors available to monitor the major coastal hazards in vulnerable coastal regions.

How to cite: Ahmed, T., Creedon, L., Anton, I., and Gharbia, S.: The use of low-cost sensors for monitoring coastal climate hazards and developing early warning support against extreme events., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8825, https://doi.org/10.5194/egusphere-egu22-8825, 2022.

EGU22-9328 | Presentations | GI6.1 | Highlight

Mapping NO2 pollution in Piedmont Region (Italy) using TROPOMI: preliminary results 

Adele Campus, Fiorella Acquaotta, and Diego Coppola

Recently, numerous agencies and administrations have shown in their latest reports that the negative impact of atmospheric air pollution on human health cannot be overlooked. It is therefore essential to understand the spatial and temporal distribution of the concentrations of the main pollutants and how they change. Among the numerous strategies proposed to tackle this problem, the study of satellite data has assumed a key role since the 1970s, extending analyses previously carried out only with ground-based instruments.

In this work we analyzed data acquired by TROPOMI (TROPOspheric Monitoring Instrument), a multispectral imaging spectrometer onboard the ESA Copernicus Sentinel-5P satellite (in orbit since October 2017) and specifically designed for mapping atmospheric composition. In particular, we processed the TROPOMI NO2 products acquired over the Piedmont Region (Italy) between 2018 and 2021. We obtained preliminary results by comparing the satellite-derived tropospheric NO2 column data with ground-based NO2 concentrations acquired by the ARPA Piemonte network in different urban and geomorphological contexts. Specifically, we compared the TROPOMI-derived time series with acquisitions from ground stations located in urban and suburban areas (e.g. in the city of Turin), classified as “traffic stations”, and in rural areas (low population density and countryside), classified as “background stations”. The results allow us to investigate the correlation and coherence between the two datasets and to discuss the added value and limits of satellite data in different environmental contexts, with the prospect of providing NO2 concentration maps of the Piedmont Region.
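The correlation check between satellite and ground time series can be sketched as follows; this is an editorial illustration, and the NO2 values are assumptions, not ARPA or TROPOMI data:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation between two co-located time series."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Illustrative monthly NO2 values (arbitrary units, not real data):
tropomi = [40, 35, 20, 15, 25, 38]   # satellite tropospheric column proxy
ground  = [55, 50, 28, 22, 33, 52]   # in-situ station concentration proxy
print(pearson_r(tropomi, ground))
```

A high r would indicate that the satellite columns track the ground-level seasonal cycle, even when absolute values differ.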

How to cite: Campus, A., Acquaotta, F., and Coppola, D.: Mapping NO2 pollution in Piedmont Region (Italy) using TROPOMI: preliminary results, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9328, https://doi.org/10.5194/egusphere-egu22-9328, 2022.

EGU22-10455 | Presentations | GI6.1

Large and small-scale multi-sensors remote sensing for dumpsites characterization and monitoring 

Angela Celauro, Matteo Cagnizi, Annalisa Cappello, Emilio D'Amato, Peppe Junior Valentino D'Aranno, Gaetana Ganci, Luigi Lodato, Ilenia Marini, Maria Marsella, and Ilaria Moriero

Remote sensing techniques are an increasingly reliable means for monitoring, detecting and analysing the spatial and temporal changes of solid waste and landfill sites. In this paper, different UAV and satellite sensors are used to detect, characterize and monitor dumpsites in Sicily (Italy). In particular, the data acquired and processed are (i) high-density point clouds from a LIDAR sensor; (ii) optical photograms with a resolution of 3 cm; (iii) thermal photograms with a resolution of 5 cm/pixel; and (iv) multispectral photograms with a resolution of 5 cm/pixel. High-spatial-resolution UAV multispectral and thermal remote sensing allowed the extraction of indicators, such as the Normalized Difference Vegetation Index (NDVI) and the Land Surface Temperature (LST), useful for characterizing changes in the vegetation and the skin temperature increase due to organic waste decomposition, respectively. On the other hand, UAV optical images processed into high-resolution orthophotos, integrated with the high-density LIDAR point clouds, were used to identify the effective perimeter of the landfill body and to extract waste volumes. These products were integrated and compared with those obtained from medium-to-high-spatial-resolution satellite images, such as from the Landsat, ASTER, Sentinel-2 and PlanetScope sensors. Results show that UAV data represent an excellent opportunity for detecting and characterizing dumpsites in extremely high detail, and that their joint use with satellite data is recommended for comparison at different scales, allowing continuous monitoring. Additional SAR methodologies will be investigated for evaluating landslides of the landfill body over the years, which could be integrated with high-resolution satellite multispectral and hyperspectral images for monitoring the environmental impact of dumpsites.

How to cite: Celauro, A., Cagnizi, M., Cappello, A., D'Amato, E., D'Aranno, P. J. V., Ganci, G., Lodato, L., Marini, I., Marsella, M., and Moriero, I.: Large and small-scale multi-sensors remote sensing for dumpsites characterization and monitoring, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10455, https://doi.org/10.5194/egusphere-egu22-10455, 2022.

EGU22-10490 | Presentations | GI6.1

Estimation of maize sowing dates from Sentinel 1&2 data, over South Piedmont 

Matteo Rolle, Mehrez Zribi, Stefania Tamea, and Pierluigi Claps

Information on crop sowing dates is important to enhance the accuracy of crop models and for the assessment of crop requirements during the growing season. The sowing calendars of densely harvested areas are often driven by heterogeneous factors such as annual crop rotations, crop switches and the alternation of winter and summer products over the same fields. Remote sensing is widely used for agricultural applications, especially to maximize crop yields through precision farming tools. Indices combining optical and infrared bands are particularly suitable for crop classification algorithms and plant health monitoring. Synthetic Aperture Radar (SAR) is often used in agriculture to classify irrigated and rainfed fields, due to its high sensitivity to soil water content. Although SAR data are also used to identify changes in ground roughness, this information has rarely been combined with optical data to identify crop sowing dates at the field scale.

In this study, SAR data from Sentinel-1 and NDVI derived from multispectral (MSI) acquisitions of Sentinel-2 have been used to identify the sowing dates of maize over a densely harvested pilot area in South Piedmont (Italy). NDVI data have been used to identify maize fields, together with the agricultural geodatabase provided by the Piedmont public authority. The moisture-induced noise in the SAR data was filtered to avoid the impact of precipitation on the radar signal during the bare-soil phase. By combining the VH and VV bands acquired by Sentinel-1, it was possible to identify the moment when maize plants break through the soil in each field.
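As an illustrative sketch (not the authors' algorithm), one simple way to combine VH and VV is to threshold a smoothed VH/VV ratio time series to flag an emergence point; the threshold, 3-sample smoother and dB values below are all assumptions:

```python
import numpy as np

def emergence_index(vh_db, vv_db, threshold=-5.0):
    """Return the index (in the smoothed series) of the first acquisition
    where the VH/VV difference in dB rises above `threshold`, used here as
    a proxy for plants breaking through the soil; -1 if never exceeded."""
    ratio = np.asarray(vh_db, dtype=float) - np.asarray(vv_db, dtype=float)
    smooth = np.convolve(ratio, np.ones(3) / 3, mode="valid")  # damp moisture noise
    above = np.flatnonzero(smooth > threshold)
    return int(above[0]) if above.size else -1

# Synthetic dB time series for one field (bare soil, then emergence):
vh = [-22.0, -21.5, -22.0, -18.0, -15.0, -13.0, -12.0]
vv = [-14.0, -14.0, -14.5, -13.0, -12.0, -11.5, -11.0]
print(emergence_index(vh, vv, threshold=-5.0))
```

In practice the flagged index would be mapped back to the Sentinel-1 acquisition date of that field.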

Results show good agreement with the sowing-period information obtained from local farmers, also in terms of multiple growing seasons due to the presence of different maize types. The distribution of sowing dates shows that most of the maize is sown during the second half of May, while the remaining fields are sown up to a month later, after the harvesting of winter crops. The method proposed in this study may lead to significant applications in agricultural monitoring, providing useful information for crop-related management policies. The combined use of SAR and NDVI data has the potential to improve crop models for the benefit of yields and food security.

How to cite: Rolle, M., Zribi, M., Tamea, S., and Claps, P.: Estimation of maize sowing dates from Sentinel 1&2 data, over South Piedmont, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10490, https://doi.org/10.5194/egusphere-egu22-10490, 2022.

EGU22-10607 | Presentations | GI6.1

Use of Rapideye images from the planet platform to update vegetation cover studies in Tenosique, Tabasco, Mexico. 

Jacob Jesús Nieto Butrón, Nelly Lucero Ramírez Serrato, Mariana Patricia Jácome Paz, Tania Ximena Ruiz Santos, and Juan Manuel Núñez

Tenosique is a small town on the border between Mexico and Guatemala, on the banks of the Usumacinta River. The area has a tropical climate with swampy and jungle zones. Previous studies have documented changes in vegetation cover related to the public policies applied at the site. Examples of these policies are: the 1917 agrarian reform distributing land to peasants for cultivation; in 1938, concessions to national and foreign companies to exploit forest resources; in 1958, the agrarian reform for cultivation that pushed the agricultural zone into the jungle; in 1965, the food crisis that promoted livestock farming; in 1976, the shift to oil extraction; in 1982, with the economic crisis, the withdrawal of financial support for peasants and their ejidos; and finally, in 2008, the designation of the area as a flora and fauna protection zone. Past studies have approached the area from social and artistic as well as quantitative points of view, using Landsat satellite images covering long time spans at a regional scale; however, the coarse resolution of the results has made interpretation difficult, reporting only a 20% loss of vegetation over time. The objective of this project is to update the pre-existing study using high-resolution images over a smaller area. To this end, 5-m resolution RapidEye satellite images were downloaded from the Planet platform (Planet Application Program Interface: In Space for Life on Earth) under an educational license obtained through an artistic project. The images span 2010 to 2020. The methodology includes the corresponding atmospheric corrections, a supervised classification, and cover analysis based on the Normalized Difference Vegetation Index (NDVI). The conclusions show the impact of the improved input resolution on the study.

How to cite: Nieto Butrón, J. J., Ramírez Serrato, N. L., Jácome Paz, M. P., Ruiz Santos, T. X., and Manuel Núñez, J.: Use of Rapideye images from the planet platform to update vegetation cover studies in Tenosique, Tabasco, Mexico., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10607, https://doi.org/10.5194/egusphere-egu22-10607, 2022.

EGU22-11409 | Presentations | GI6.1

Deep Learning and Sentinel-2 data for artisanal mine detection in a semi-desert area 

María Cuevas-González, Lorenzo Nava, Oriol Monserrat, Filippo Catani, and Sansar Raj Meena

In sub-Saharan Africa, artisanal and small-scale mining (ASM) represents a source of subsistence for a significant number of individuals. While 40 million people officially work in ASM across 80 countries, more than 150 million rely indirectly upon it. However, because ASM is often illegal and uncontrolled, the materials employed in the excavation process are highly dangerous for the environment, as well as for the people involved in the mining activities. One of the most important aspects regarding ASM is localization, which is currently missing for most African regions. ASM inventories are crucial for planning safety and environmental remediation interventions. Furthermore, past ASM locations could be used to predict the spatial probability of new mines appearing. To this end, we propose a Deep Learning (DL) based approach able to exploit open-source Sentinel-2 data and an incomplete, small-size mine inventory to accomplish this task. The study area lies in northern Burkina Faso, Africa. It was chosen for its peculiar semi-desert environment, which, in addition to being a challenging mapping environment in itself, presents wide spatial variability. Moreover, given the high level of danger involved in field mapping, it is essential to develop reliable remote sensing-based methods able to detect ASM. A performance comparison of two convolutional neural network (CNN) architectures is provided, along with an in-depth analysis of the predictions for both dry and rainy seasons. The models' predictions are compared against an inventory obtained by manual mapping of Sentinel-2 tiles, aided by multitemporal interpretation of Google Earth imagery. The findings show that this approach can detect ASM in semi-desert areas starting from a few samples, at low cost in terms of both human and financial resources.

How to cite: Cuevas-González, M., Nava, L., Monserrat, O., Catani, F., and Meena, S. R.: Deep Learning and Sentinel-2 data for artisanal mine detection in a semi-desert area, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11409, https://doi.org/10.5194/egusphere-egu22-11409, 2022.

EGU22-11908 | Presentations | GI6.1

Questioning the adequacy of an invasive plant management technique through remote sensing observations 

François Toussaint, Alice Alonso, and Mathieu Javaux

Palo Verde National Park, located in the northwest of Costa Rica, contains a wetland plain of international ecological importance in Central America. It is home to rich biodiversity and provides vital shelter for over 60 species of migratory and resident birds.

From the 1980s onward, the wetland landscape has shifted from diverse vegetation and large open-water areas to a near-monotypic stand of cattail (Typha domingensis). This resulted in a sharp reduction in the number of birds in the area, as many bird species prefer other native plants and open water for feeding, nesting and shelter. The Fangueo technique, which consists of crushing the plants under water using a tractor equipped with angle-iron paddle wheels, has been adopted to reduce the spread of Typha.

This plant management technique typically results in a significant decrease in Typha population in the first year after its implementation, as well as an increase in plant diversity and open water area.

In this study, we used historical Landsat and Sentinel imagery to investigate the medium to long-term impact of Fangueo on vegetation and open water. We found that invasive vegetation regrowth happened faster than previous studies had indicated. The increase in open water areas was therefore short-lived. This result questions the adequacy of this technique for invasive plant management.

This work highlights how crucial simple remote sensing methods can be for assessing the adequacy of supposedly effective environmental management practices, and for informing stakeholders.

How to cite: Toussaint, F., Alonso, A., and Javaux, M.: Questioning the adequacy of an invasive plant management technique through remote sensing observations, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11908, https://doi.org/10.5194/egusphere-egu22-11908, 2022.

EGU22-12697 | Presentations | GI6.1

Application of the autoregressive integrated moving average (ARIMA) model in prediction of mining ground surface displacement 

Marek Sompolski, Michał Tympalski, Anna Kopeć, and Wojciech Milczarek

Underground mining, regardless of the excavation method used, has an impact on the terrain surface. For this reason, continuous monitoring of the ground surface above the excavations is necessary. Deformations of the ground surface occur with a time delay relative to the mining works, which poses a risk of significant deformation in built-up areas, potentially leading to building disasters. In addition to monitoring, it is therefore necessary to forecast displacements. At present this is usually done with empirical integral models, which describe the shape of a fully formed subsidence basin and require detailed knowledge of the geological situation and deposit parameters; insufficiently precise determination of their coefficients may lead to significant calculation errors. Machine learning can be an interesting alternative for predicting ground displacement in mining areas. Machine learning algorithms fit a model to a set of input data so that it best represents the correlations and trends detected in the set; the fitting process must, however, be controlled to avoid overfitting. The validated model can then be used to detect new deformation of the ground surface, categorize the resulting displacements, or predict subsidence values. Here, an ARIMA (autoregressive integrated moving average) model was used to predict deformation values for single points placed at the centres of subsidence basins in the LGCB (Legnica-Głogów Copper Belt) area. InSAR time series calculated with the SBAS method for the years 2016-2021 were used as input data. The results were compared with a persistence model, over which accuracy improved by several percentage points.
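As a hedged sketch of the idea (not the authors' implementation, which uses a full ARIMA model), the autoregressive core can be fitted by least squares and its one-step forecast compared with the persistence baseline; the subsidence series below is synthetic, not SBAS data:

```python
import numpy as np

def ar1_forecast(series):
    """Least-squares AR(1) fit (the AR core of an ARIMA(1,0,0) model)
    and a one-step-ahead forecast of the next value."""
    x = np.asarray(series, dtype=float)
    x_prev, x_next = x[:-1], x[1:]
    # Fit x_t = c + phi * x_{t-1} by ordinary least squares.
    A = np.column_stack([np.ones_like(x_prev), x_prev])
    c, phi = np.linalg.lstsq(A, x_next, rcond=None)[0]
    return c + phi * x[-1]

# Synthetic, steadily subsiding point (mm, illustrative only):
subsidence = np.array([0.0, -2.1, -4.0, -6.2, -7.9, -10.1, -12.0])
persistence = subsidence[-1]   # persistence baseline: last value carried forward
print(ar1_forecast(subsidence), persistence)
```

For a steadily subsiding point, the AR(1) forecast continues the trend while persistence simply repeats the last observation, which is the kind of gap the reported accuracy improvement measures.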

How to cite: Sompolski, M., Tympalski, M., Kopeć, A., and Milczarek, W.: Application of the autoregressive integrated moving average (ARIMA) model in prediction of mining ground surface displacement, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12697, https://doi.org/10.5194/egusphere-egu22-12697, 2022.

EGU22-12774 | Presentations | GI6.1

Using UAV-based Infrared Thermometry in the identification of gas seeps: a case study from Ciomadul dormant volcano (Eastern Carpathians, Romania) 

Boglarka Kis, Dan Mircea Tămaș, Alexandra Tămaș, and Roland Szalay

In our study, we tested UAV-based infrared thermometry (IRT) and Structure from Motion (SfM) for the identification of CO2-rich gas emission areas at the Ciomadul dormant volcanic area, Eastern Carpathians. Our aim is to demonstrate the efficiency of the identification method, providing an example from a case study in the Eastern Carpathians.

The gas emissions at Ciomadul have high fluxes and are of magmatic origin, associated with the past volcanic activity. We made the following assumptions before performing the drone measurements: the temperature of the gas vents is constant, as is their flux, so variability arises only from changes in ambient temperature. We knew the temperature of the gas emissions beforehand (6 °C), so we chose periods when the ambient temperature was either lower or higher than the gas temperature. We performed several field observations with the camera both in the daytime and in the evening.

The UAV photography was acquired using a DJI Mavic 2 Enterprise Dual drone. This device is equipped with a 12 MP visual (RGB) camera with a 1/2.3" CMOS sensor. The visual camera has a lens with a field of view of approx. 85° and a 24 mm (35 mm format equivalent) focal length at f/2.8 aperture. The drone also carries an integrated radiometric FLIR® thermal sensor: an uncooled VOx microbolometer with a 57° horizontal field of view, f/1.1 aperture, 160x120 sensor resolution (640x480 image size) and an 8-14 μm spectral band.

The gas vents were clearly visible in the thermal images, and we discovered additional seeps that had not been identified before. We later confirmed the presence of the gas emissions with in situ measurements of CO2 concentrations. The visibility of the gas emissions was influenced by parameters such as temperature, the orientation of the gas vent, the influence of sunlight, and the flux of the gas vent.


How to cite: Kis, B., Tămaș, D. M., Tămaș, A., and Szalay, R.: Using UAV-based Infrared Thermometry in the identification of gas seeps: a case study from Ciomadul dormant volcano (Eastern Carpathians, Romania), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12774, https://doi.org/10.5194/egusphere-egu22-12774, 2022.

EGU22-1445 | Presentations | GM6.6

Machine learning for boulder detection in acoustic data 

Peter Feldens, Svenja Papenmeier, Sören Themann, Agata Feldens, and Patrick Westfeld

Sublittoral hard substrates, for example those formed by blocks and boulders, are hotspots of marine biodiversity, especially for benthic communities. Knowledge of boulder occurrence is also important for marine and coastal management, including offshore wind parks and safety of navigation, and boulder occurrences have to be reported by member states to the European Union. Typically, boulders are located by acoustic surveys with multibeam echosounders and side-scan sonars, but the manual interpretation of these data is subjective and time-consuming. This presentation reports on recent work on the detection of boulders in different acoustic datasets by convolutional neural networks, highlighting current approaches, challenges and future opportunities.

How to cite: Feldens, P., Papenmeier, S., Themann, S., Feldens, A., and Westfeld, P.: Machine learning for boulder detection in acoustic data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1445, https://doi.org/10.5194/egusphere-egu22-1445, 2022.

EGU22-4046 | Presentations | GM6.6

Submarine glacial landscapes of the Western Estonian Shelf and implications for ice-flow reconstruction 

Vladimir Karpin, Atko Heinsalu, and Joonas Virtasalo

Geomorphological studies of the Baltic Sea floor are still scarce, and little is directly known about glacial bedforms and palaeo-ice-flow dynamics in the area. However, recently collected high-resolution multibeam bathymetric data from Western Estonian territorial waters and the EEZ reveal direct geomorphological evidence of glacial bedforms, such as iceberg scours (ploughmarks) and drumlins, enabling the reconstruction of ice-flow patterns on the Western Estonian shelf.

High-resolution multibeam data reveal widespread linear and curved depressions, interpreted as iceberg scours produced by ploughing and grounding icebergs during and soon after the final ice retreat from the area, approximately 13.2 to 12.3 kyr BP. We recognize two populations of scours (A and B), formed either on top of coarse-grained glacial deposits or on top of the superimposed glaciolacustrine and post-glacial sediments exposed on the seafloor. The scours of both populations are on average 780 m long, 54 m wide and 1.6 m deep. The populations have different average orientations: NE-SW for population A and ENE-WSW for population B.

We also report a well-preserved geomorphological record of streamlined bedforms (mostly drumlins). We identify two diverging flow sets, partially continuing onshore, revealing ice-sheet behaviour in the area before the Palivere stadial (13.2 kyr BP). The observed ice-flow directions permit refining earlier reconstructions and indicate that there were no significant ice-margin standstills in the area.

How to cite: Karpin, V., Heinsalu, A., and Virtasalo, J.: Submarine glacial landscapes of the Western Estonian Shelf and implications for ice-flow reconstruction, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4046, https://doi.org/10.5194/egusphere-egu22-4046, 2022.

The spatial distribution of deep-sea polymetallic nodules (PMN) is of high interest due to increasing global demand for metals (Ni, Co, Cu) and their significant contribution to deep-sea ecology as hard substrate. Spatial mapping is based on a combination of multibeam echosounders and underwater images, in parallel with traditional ground-truth sampling by box coring. The combined analysis of such data has been advanced by machine learning approaches, especially for automated image analysis and quantitative predictive mapping. However, the presence of spatial autocorrelation (SAC) in PMN distribution has not been studied extensively. While SAC could provide information on the patchy distribution of PMN and thus inform variable selection before machine learning modelling, it could also result in over-optimistic validation performance if not treated carefully. Here, we present a case study from a geomorphologically complex part of the Peru Basin. A local Moran's I analysis revealed SAC in the PMN distribution, which can be linked to specific seafloor acoustic and geomorphological characteristics such as aspect and backscatter intensity. A quantile regression forest (QRF) model was developed using three cross-validation (CV) techniques: random, spatial, and feature-space cluster blocking. The results show that spatial block cross-validation is the least biased method, whereas the commonly used random CV yields over-optimistic estimates of the true prediction error. QRF predicts well in morphologically similar areas, but model uncertainty is high in areas with novel feature-space conditions. There is therefore a need for dissimilarity analysis and transferability assessment, even at local scales. Here, we used the recently proposed "Area of Applicability" method to map the geographical areas where feature-space extrapolation occurs.
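The blocking step that underlies spatial block cross-validation can be sketched as follows; this is an editorial illustration, and the coordinates and block size are assumptions, not the study's data:

```python
import numpy as np

def spatial_blocks(x, y, block_size):
    """Assign each sample to a spatial block id on a regular grid, so that
    whole blocks can be held out together during cross-validation instead
    of splitting spatially autocorrelated neighbours across folds."""
    ix = np.floor(np.asarray(x, dtype=float) / block_size).astype(int)
    iy = np.floor(np.asarray(y, dtype=float) / block_size).astype(int)
    # Combine the two grid indices into a single block label.
    return ix * 10_000 + iy

# Illustrative coordinates (metres); the block size would normally be chosen
# larger than the autocorrelation range suggested by a Moran's I analysis.
x = np.array([10.0, 20.0, 510.0, 980.0])
y = np.array([5.0, 495.0, 15.0, 990.0])
print(spatial_blocks(x, y, block_size=500.0))
```

Each CV fold then holds out entire block labels, which is why its error estimate is less affected by SAC than a random split.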

How to cite: Gazis, I.-Z. and Greinert, J.: Machine learning-based modeling of deep-sea polymetallic nodules spatial distribution: spatial autocorrelation and model transferability at local scales, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4495, https://doi.org/10.5194/egusphere-egu22-4495, 2022.

Geogenic reefs are hotspots for benthic organisms, including fish. Given their ecosystem importance, the European Union has protected them by law and demands area-wide mapping. The German federal agency for nature conservation, together with scientific experts, has recently published a guideline for mapping reefs in the Baltic Sea. Reef delineation is based on hydroacoustic backscatter mosaics, which are divided into and interpreted in 50x50 m cells. Each cell is categorized according to the number of boulders present: none, 1-5, or more than 5. The categorization depends strongly on data quality, the hydroacoustic frequency used, and the boulder identification technique (manual or automatic). By comparing data acquired at different frequencies, each interpreted both manually and automatically, we demonstrate the importance of appropriate data for reef delineation.

How to cite: Papenmeier, S. and Feldens, P.: Hydroacoustic mapping of geogenic reefs, a matter of technique: a practical example from the Baltic Sea, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4684, https://doi.org/10.5194/egusphere-egu22-4684, 2022.

EGU22-4754 | Presentations | GM6.6

Satellite-based coastal bathymetry for annual monitoring on the Mediterranean coast: A case study in the Balearic Islands 

Sandra Viana-Borja, Angels Fernández-Mora, Richard P. Stumpf, Gabriel Navarro Almendros, and Isabel Caballero de Frutos

More than 60% of the world's population lives near coastal zones. These are the most productive as well as the most vulnerable ecosystems in the world. Considering these, among other factors, the study of coastal zones is a matter of vital importance, so that it is necessary to have accurate information for an appropriate coastal management. The shallow bottom topography is considered one of the most critical parameter in coastal studies, because of its significance in different areas such as industry, navigation, defense, aquaculture, tourism, maritime planning, and environmental management, among others. The bathymetry is one of the biggest challenges for coastal engineers and scientists, since it is quiet complex to gather accurate data and to keep it updated because it is a time-consuming and very expensive process. In recent years, satellite-derived bathymetry (SDB) has emerged as an alternative to the most common survey techniques. In the present case study, a recently developed multi-temporal SDB model is applied to overcome problems associated with turbidity and noise effects. This model had been applied in many areas of the Caribbean and EEUU coasts with outstanding performance, providing an accurate bathymetry of the selected areas. In this case, it has been analyzed the bottom topography changes in the Cala Millor beach (Mallorca Island, Spain) between 2018, 2019 and 2020, using images from the Sentinel-2A/B twin mission of the Copernicus Programme. ACOLITE processor has been applied to Sentinel-2 L1A images for atmospheric and sunglint correction. The study aims at demonstrating the effectiveness of this model in the Mediterranean region to show its consistent performance on distinct geographic zones around the world, in addition to improving the results with a composited multi-temporal image selected automatically. 
Demonstrating that this capability can be applied with confidence to any micro-tidal coast around the world may enhance existing survey methods and contribute substantially to scientific knowledge by providing scientists and engineers with new science-based tools to better understand coastal zones.
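Many SDB approaches, including multi-temporal variants, build on the band-ratio relationship of Stumpf et al. (2003), in which the log-ratio of blue to green reflectance varies approximately linearly with depth in optically shallow water. A minimal Python sketch (the function names and the simple linear calibration against in-situ soundings are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def stumpf_ratio(blue, green, n=1000.0):
    """Log-ratio of blue to green reflectance (Stumpf et al., 2003);
    in optically shallow water it varies roughly linearly with depth."""
    return np.log(n * blue) / np.log(n * green)

def calibrate_sdb(ratio, known_depths):
    """Fit the linear model depth = m1 * ratio + m0 against soundings
    and return a callable mapping ratio values to depth estimates."""
    m1, m0 = np.polyfit(ratio, known_depths, 1)
    return lambda r: m1 * r + m0
```

In practice the per-pixel ratios would be computed from atmospherically corrected Sentinel-2 bands and calibrated against available soundings; a multi-temporal model additionally composites ratios across scenes to suppress turbidity and noise.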

 

How to cite: Viana-Borja, S., Fernández-Mora, A., Stumpf, R. P., Navarro Almendros, G., and Caballero de Frutos, I.: Satellite-based coastal bathymetry for annual monitoring on the Mediterranean coast: A case study in the Balearic Islands, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4754, https://doi.org/10.5194/egusphere-egu22-4754, 2022.

EGU22-8766 | Presentations | GM6.6

How fast do Trawlmarks degenerate? A field study in muddy sediments near Fehmarn Island, Germany

Mischa Schönke, David Clemens, and Peter Feldens

Bottom trawling is a fishing technique in which a net held open by otter boards is dragged across the seafloor to harvest bottom-living resources. This action imposes high levels of stress on ecosystems by overturning boulders, disturbing and resuspending surface sediment, and plowing scars into the seabed. In the long term, the trawling impact on benthic habitats becomes problematic when the time between trawls is shorter than the time it takes for the ecosystem to recover. Since quantitative information on the intensity of bottom fishing is particularly important but rarely available, our study is crucial to reveal the extent and magnitude of anthropogenic impacts on the seafloor. As part of the MGF Baltic Sea project, a multibeam echosounder was used to record high-resolution bathymetric data in a small, heavily fished focus area at a 1-year interval. Based on these bathymetric data, we present an automated workflow for extracting trawlmark features from seafloor morphology and deriving parameters that qualitatively characterize trawlmark intensity. We also demonstrate how the seafloor surface of an exploited area develops within a year and which regeneration indicators can be derived from this.
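A common building block for automated trawlmark extraction from bathymetry is residual relief analysis: subtract a smoothed background surface and flag narrow negative residuals. The sketch below is a generic illustration of this idea, not the authors' workflow; the window size and depth threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def trawlmark_residual(bathy, window=15, threshold=-0.05):
    """Flag narrow seabed incisions such as trawl marks: remove a
    median-smoothed background surface, then mark residuals deeper
    than `threshold` (metres)."""
    background = median_filter(bathy, size=window)
    residual = bathy - background
    return residual, residual < threshold
```

From the resulting mask, per-area parameters such as incised area fraction or mark density can then be derived and tracked between repeat surveys.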

How to cite: Schönke, M., Clemens, D., and Feldens, P.: How fast do Trawlmarks degenerate? A field study in muddy sediments near Fehmarn Island, German., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8766, https://doi.org/10.5194/egusphere-egu22-8766, 2022.

EGU22-9050 | Presentations | GM6.6

Measurements of sediment backscatter in a flume: preliminary experiment results and prospective 

Xavier Lurton, Marc Roche, Thaiënne van Dijk, Laurent Berger, Ridha Fezzani, Peer Fietzek, Sven Gastauer, Mark Klein Breteler, Chris Mesdag, Steve Simmons, and Daniel Parsons

Multifrequency single- and multibeam echosounders are today mature technologies for underwater mapping and monitoring of the seafloor and water column. However, the current scarcity of reference models (checked with field measurement results including detailed geoacoustical groundtruthing) for seafloor backscatter angular response and suspended sediment scattering hampers the generation of applicable information. In this context, defining heuristic models derived from measurements made in a well-controlled environment should optimize the use of backscatter data for ocean observation and management. Such reference measurements could be conducted in flumes designed for hydrodynamics and sedimentology experimental studies, since such facilities constitute well-dimensioned and equipped infrastructures adapted to the deployment of echosounders over controlled sedimentary targets. In order to check the feasibility of this concept in terms of acoustical measurement quality, a preliminary experiment was conducted in the Delta Flume (dimensions 291 x 5 x 9.5 m), as a preparation for more comprehensive systematic measurement campaigns. Multifrequency single- and multibeam echosounder data were recorded from the flume floor at various angles and from in-water fine sand plumes. The results reveal that reverberation caused by the flume walls and infrastructure does not interfere significantly with bottom targets and that fine sand plumes in the water column can be detected and measured for various particle concentrations. Future comprehensive experiments (in preparation) will feature multi-frequency multi-angle measurements both on a variety of sediment types and interface roughness, and on plumes of various sediment grain size, shape and concentration.

How to cite: Lurton, X., Roche, M., van Dijk, T., Berger, L., Fezzani, R., Fietzek, P., Gastauer, S., Klein Breteler, M., Mesdag, C., Simmons, S., and Parsons, D.: Measurements of sediment backscatter in a flume: preliminary experiment results and prospective, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9050, https://doi.org/10.5194/egusphere-egu22-9050, 2022.

EGU22-10268 | Presentations | GM6.6

A New Toolset for Multiscale Seabed Characterization 

Alexander Ilich, Benjamin Misiuk, Vincent Lecours, and Steven Murawski

Terrain attributes are increasingly used in seabed mapping to describe the shape of the seabed. In recent years, many calls have been made to move seabed mapping practices towards multiscale characterization to better capture the natural geomorphic patterns found at different spatial scales. However, the community of practice lacks computationally efficient, user-friendly, and open-source tools to implement multiscale analyses, preventing multiscale analyses from gaining traction for seabed mapping and characterization. Here we present a new R package that enables the calculation of multiple terrain attributes like slope, curvature, and rugosity from bathymetric data. The user-friendly package allows for a repeatable and well-documented workflow that can be run using open-source tools. We also introduce a new measure of rugosity that ensures decoupling from slope. Examples of the performance of the package, including the new rugosity metric, will be presented using bathymetric datasets presenting different characteristics.
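The package described above is implemented in R; as a toolset-independent illustration of the kind of terrain attribute it computes, slope can be sketched with Horn's (1981) 3x3 finite-difference kernel, a common choice in terrain-attribute tools (a minimal sketch, not the package's actual code):

```python
import numpy as np

def slope_degrees(dem, cellsize):
    """Slope (degrees) from a bathymetry/elevation grid using Horn's
    3x3 finite-difference kernel. Edges are handled by edge-padding."""
    z = np.pad(np.asarray(dem, dtype=float), 1, mode="edge")
    # 1-2-1 weighted central differences across the 3x3 window
    dz_dx = ((z[:-2, 2:] + 2 * z[1:-1, 2:] + z[2:, 2:])
             - (z[:-2, :-2] + 2 * z[1:-1, :-2] + z[2:, :-2])) / (8 * cellsize)
    dz_dy = ((z[2:, :-2] + 2 * z[2:, 1:-1] + z[2:, 2:])
             - (z[:-2, :-2] + 2 * z[:-2, 1:-1] + z[:-2, 2:])) / (8 * cellsize)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
```

Multiscale variants recompute such attributes over progressively larger analysis windows, which is the computationally expensive step that dedicated packages optimize.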

How to cite: Ilich, A., Misiuk, B., Lecours, V., and Murawski, S.: A New Toolset for Multiscale Seabed Characterization, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10268, https://doi.org/10.5194/egusphere-egu22-10268, 2022.

EGU22-10426 | Presentations | GM6.6

Bathymetry inversion with optimal Sentinel-2 imagery using random forest modeling 

Sanduni Mudiyanselage, Amr Abd-Elrahman, and Benjamin Wilkinson

Bathymetry inversion using remote sensing techniques is a topic of increasing interest in coastal management and monitoring. Freely accessible Sentinel-2 imagery offers high-resolution multispectral data that enables bathymetry inversion in optically shallow waters. This study presents a framework leading to a generalized Satellite-Derived Bathymetry (SDB) model applicable to vast and diversified coastal regions utilizing multi-date images. A multivariate regression random forest model was used to derive bathymetry from optimal Sentinel-2 images over an extensive 210 km coastal stretch along southwestern Florida (United States). Model calibration and validation were done using airborne lidar bathymetry (ALB) data. As ALB surveys are costly, the proposed model was trained with a limited and practically feasible ALB data sample to expand the model’s practicality. Using multi-image bands as individual features in the random forest model yielded high accuracy with root-mean-square error values of 0.42 m and lower for depths up to 13 m.
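The regression setup can be sketched with scikit-learn; here synthetic reflectances and pseudo-depths stand in for the Sentinel-2 band values and ALB soundings, and the hyperparameters are illustrative rather than those of the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# synthetic stand-in: 500 calibration pixels, 4 spectral "bands" as features
X = rng.uniform(0.01, 0.2, size=(500, 4))
# pseudo-depths as a smooth function of two bands plus noise
y = 30 * X[:, 1] - 20 * X[:, 2] + rng.normal(0.0, 0.1, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
```

Using bands from multiple image dates as separate features, as the abstract describes, simply widens `X`; the same fit/validate pattern applies.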

How to cite: Mudiyanselage, S., Abd-Elrahman, A., and Wilkinson, B.: Bathymetry inversion with optimal Sentinel-2 imagery using random forest modeling, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10426, https://doi.org/10.5194/egusphere-egu22-10426, 2022.

EGU22-10829 | Presentations | GM6.6

Previously unknown topographic features beneath the Amery Ice Shelf, East Antarctica, revealed by airborne gravity 

Junjun Yang, Jingxue Guo, Jamin S. Greenbaum, Xiangbin Cui, Liangcheng Tu, Lin Li, Lenneke M. Jong, Xueyuan Tang, Bingrui Li, Donald D. Blankenship, Jason L. Roberts, Tas van Ommen, and Bo Sun

The seafloor topography under the Amery Ice Shelf steers the flow of ocean currents transporting ocean heat, and is thus a prerequisite for precise modeling of ice-ocean interactions. However, hampered by thick ice, direct observations of sub-ice-shelf bathymetry are rare, limiting our ability to quantify the evolution of this sector and its future contribution to global mean sea level rise. We estimated the seafloor topography of this region from airborne gravity anomalies using simulated annealing. Unlike the current seafloor topography model, which shows a comparatively flat seafloor beneath the calving front, our results reveal a 255-m-deep shoal at the western side and a 1,050-m-deep trough at the eastern side, which are important topographic features controlling ocean heat transport into the sub-ice cavity. The gravity-estimated seafloor topography model also reveals previously unknown depressions and sills in the middle of the Amery Ice Shelf, which are critical to improved modeling of the sub-ice-shelf ocean circulation and the induced basal melting. With the refined seafloor topography model, we anticipate improved performance in modeling the response of the Amery Ice Shelf to ocean forcing.
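Simulated annealing, the optimization scheme named above, perturbs a candidate model and accepts worse fits with probability exp(-Δmisfit/T) while a temperature T cools, allowing escape from local minima. A generic one-parameter sketch (the step size, cooling schedule and toy misfit are illustrative; the actual inversion perturbs a gridded bathymetry model to fit the observed gravity anomalies):

```python
import math
import random

def simulated_annealing(misfit, x0, step=0.5, t0=1.0,
                        cooling=0.995, iters=2000, seed=0):
    """Minimize `misfit` by randomly perturbing the model and accepting
    worse candidates with probability exp(-delta/T) as T decays."""
    rng = random.Random(seed)
    x, fx = x0, misfit(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        xn = x + rng.uniform(-step, step)
        fn = misfit(xn)
        # always accept improvements; occasionally accept worse models
        if fn < fx or rng.random() < math.exp(-(fn - fx) / t):
            x, fx = xn, fn
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest
```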

How to cite: Yang, J., Guo, J., Greenbaum, J. S., Cui, X., Tu, L., Li, L., Jong, L. M., Tang, X., Li, B., Blankenship, D. D., Roberts, J. L., van Ommen, T., and Sun, B.: Previously unknown topographic features beneath the Amery Ice Shelf, East Antarctica, revealed by airborne gravity, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10829, https://doi.org/10.5194/egusphere-egu22-10829, 2022.

EGU22-11654 | Presentations | GM6.6

Deep Learning for seafloor sediment mapping: a preliminary investigation using U-Net 

Rosa Virginia Garone, Tor Inge Birkenes Lønmo, Frank Tichy, Markus Diesing, Terje Thorsnes, Alexandre Carmelo Gregory Schimel, and Lasse Løvstakken

Knowing the type and distribution of seafloor sediments is crucial for many purposes, including marine spatial planning and nature conservation. Seabed sediment maps are typically obtained by manually or automatically classifying data recorded by swath sonar systems such as multibeam echosounders (MBES), aided with ground-truth data.

While progress has been made to map the seafloor based on acoustic data in an automated way, such methods have not advanced enough to become operational for routine map production in geological surveys. Mapping seafloor sediments is therefore still a manual and partly subjective process, which may imply a lack of repeatability.

In recent years, deep learning using convolutional neural networks (CNNs) has shown great promise for image classification applied in domains such as satellite or biomedical image analysis, and there is an increasing interest in the use of CNNs for seabed image classification.

In this work, we evaluate the performance of semantic segmentation using a U-Net CNN for the purpose of classifying seafloor acoustic images into sediment types.

Our study site is an area of 576 km2, located in the Søre Sunnmøre region, where seafloor sediments have been manually mapped by the Geological Survey of Norway (NGU). For our initial investigation, we simplified the NGU map into two classes – soft sediment and hard substrate – and trained multiple U-Net networks to predict the sediment classes using an MBES bathymetry grid and seabed backscatter image mosaic as source datasets. Our training reference was the expertly delineated sediment map, and the method thus seeks to mimic the human observer. Our initial analysis derived features directly from acoustic backscatter and bathymetry data but also derived slope and hillshade images from the bathymetry grid.

The MBES imagery was pre-processed and divided into patches of 256 m x 256 m (where 1 m = 1 image pixel). We evaluated four models using a single input layer (backscatter mosaic, bathymetry grid, hillshade, or slope) and three models using two input layers (hillshade & depth, hillshade & backscatter, and slope & backscatter). Performance was evaluated using the Dice score (DS), a relative measure of overlap between the predicted and reference maps.

Interestingly, results showed that for models using a single data source, the hillshade and slope models produced the highest performance with a DS of approximately 0.85, followed by the backscatter model (DS = 0.8) and the depth model with a DS of 0.7. Models using dual data sources showed improved results for the backscatter/slope & depth model (DS = 0.9) while showing a lower DS (0.7) for the hillshade & depth model.

Our preliminary results demonstrate the potential of using a U-Net to classify seafloor sediments from MBES data, thus far using two sediment classes. Assuming here that the human observer has correctly annotated the seabed sediments, such an approach could help to automate seafloor mapping in future applications. Further work will provide an in-depth analysis on feature importance, further improve the models by using additional input layers, and use data where several relevant sediment classes are included.
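As a concrete reference for the evaluation metric, the Dice score between a predicted and a reference mask can be computed as follows (a generic sketch assuming binary masks, matching the two-class setup described above):

```python
import numpy as np

def dice_score(pred, ref):
    """Dice coefficient between two binary masks:
    DS = 2 |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    pred = np.asarray(pred).astype(bool)
    ref = np.asarray(ref).astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```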

How to cite: Garone, R. V., Birkenes Lønmo, T. I., Tichy, F., Diesing, M., Thorsnes, T., Schimel, A. C. G., and Løvstakken, L.: Deep Learning for seafloor sediment mapping: a preliminary investigation using U-Net, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11654, https://doi.org/10.5194/egusphere-egu22-11654, 2022.

EGU22-11661 | Presentations | GM6.6

Novel Underwater Mapping Services through European Open Science Cloud 

Konstantinos Karantzalos, Paraskevi Nomikou, Paul Wintersteller, Josep Quintana, Kalliopi Baika, Valsamis Ntouskos, Danai Lampridou, Jafar Anbar, and NEANIAS team members

Seafloor mapping is closely related to studying and understanding the ocean, which has attracted increasing interest in recent years. Coastal management, habitat loss, underwater cultural heritage, natural disasters, marine resources and offshore installations have underlined the need for charting the seabed. This upturn has been encouraged by many national and international initiatives and culminated in the declaration of the Decade of Ocean Science for Sustainable Development (2021-2030) by the United Nations in 2017.

Novel Underwater cloud services offered through the EC H2020 NEANIAS project support this joint quest by implementing Open Science procedures through the European Open Science Cloud (EOSC). The services produce user-friendly, cloud-based solutions addressing bathymetry processing, seafloor mosaicking and classification. Hence, NEANIAS Underwater services target various end-users, representing different scientific and professional communities by offering three applications.

The Bathymetry Mapping from Acoustic Data (UW-BAT) service provides a user-friendly, cloud-based edition of the well-known open-source MB-System, via Jupyter notebooks. This service produces bathymetric grids and maps after processing the data through a flexible, fit-for-purpose workflow that implements sound speed corrections, applies tides and filters, and sets the required spatial resolution.

The Seafloor Mosaicking from Optical Data (UW-MOS) service provides a solution for representing a large area of the seafloor, on the order of tens of thousands of images, while tackling the visibility limitations imposed by the water column. The service performs several steps such as camera calibration, image undistortion, enhancement, and quality control. The final product can be a 2D image mosaic or a 3D model.

The Seabed Classification from Multispectral, Multibeam Data (UW-MM) service focuses on seabed classification by implementing cutting-edge machine learning techniques while providing a user-friendly framework. The service unfolds in four steps: uploading the data, selecting the desired seabed classes, producing the classification map, and downloading the results.

The NEANIAS Underwater services thus exploit cutting-edge technologies to provide highly accurate results, regardless of the end-user's level of expertise, while reducing processing time and cost. Moreover, access to such sophisticated services can simplify and promote the correlation of interdisciplinary data towards a better comprehension of the ocean, and the contribution of these innovative services is expected to be of high value to the marine community.

How to cite: Karantzalos, K., Nomikou, P., Wintersteller, P., Quintana, J., Baika, K., Ntouskos, V., Lampridou, D., Anbar, J., and members, N. T.: Novel Underwater Mapping Services through European Open Science Cloud, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11661, https://doi.org/10.5194/egusphere-egu22-11661, 2022.

EGU22-12128 | Presentations | GM6.6

Applying a multi-method framework to analyze the multispectral acoustic response of the seafloor 

Pedro Menandro, Alex Bastos, Benjamin Misiuk, and Craig Brown

Improvements to acoustic seafloor mapping systems have motivated novel benthic geological and biological research. Multibeam echosounders (MBES) have become a mainstream tool for acoustic remote sensing of the seabed, and recently, multispectral MBES backscatter has been developed to characterize the seabed in greater detail, yet methods for using these data are still being explored. Here, we evaluate the potential for seabed discrimination using multispectral backscatter data within a multi-method framework. We present a novel MBES dataset acquired using four operating frequencies (170 kHz, 280 kHz, 400 kHz, and 700 kHz) near the Doce River mouth, situated on the eastern Brazilian continental shelf. Image-based and angular range analysis methods were applied to characterize the multifrequency response of the seabed. The large amount of information resulting from these methods makes a unified manual seabed segmentation impractical. The data were therefore summarized using a combination of dimensionality reduction and density-based clustering, enabling hierarchical spatial classification of the seabed with sparse ground-truth.

The use of multispectral technology was fundamental to understanding the acoustic response at each frequency, achieving a benthic prediction in agreement with earlier studies in this region while providing spatial information in much greater detail than previously achieved. For most muddy areas, the median uncalibrated backscatter values from the mosaics were low for all frequencies (slightly higher for the lower frequencies). The lower frequency was presumably detecting the sub-bottom, while the higher frequency reflected primarily off the surface, potentially indicating a thick muddy deposit in this area. In these regions, the angular response curve shows high backscatter level loss, with a more pronounced loss for the higher frequency. Over a sandy high-backscatter feature, results show high scattering across the entire swath; sediments coarser than sand were poorly resolved by comparison. The density-based clustering enabled identification of two well-defined clusters, and at a higher level of detail, the muddy region could be further divided to produce four sub-clusters. These findings suggest that the multifrequency acoustic data provided greater discrimination of muddy and fine sand sediments than of coarser sediments in this area.

Backscatter data have been analyzed in several ways in the context of seafloor classification, namely: visual interpretation of mosaics, textural analysis, image-based analysis, and angular range analysis. The advantages and disadvantages of each make the choice of methodology challenging; their combined use may achieve better results via consensus. Several supervised and unsupervised techniques have been applied to seabed classification, including different clustering approaches. Density-based clustering has received little attention for seabed classification, and was successfully applied here to synthesize different approaches into a classified output. Further research on the discrimination power of multispectral backscatter, and comparisons between clustering techniques, will be useful to inform the application of these approaches for mapping seabed sediments.
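The combination of dimensionality reduction and density-based clustering described above can be sketched with scikit-learn. This is a generic illustration on synthetic stand-in features; the study's actual features, algorithms and parameters are not reproduced here.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# synthetic stand-in for per-pixel multifrequency backscatter features:
# a low-backscatter "mud" class and a high-backscatter "sand" class,
# each described by 8 features (e.g. 4 frequencies x 2 statistics)
mud = rng.normal(-30.0, 0.5, size=(200, 8))
sand = rng.normal(-15.0, 0.5, size=(200, 8))
X = np.vstack([mud, sand])

X2 = PCA(n_components=2).fit_transform(X)                  # dimensionality reduction
labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(X2)   # density-based clustering
n_clusters = len(set(labels) - {-1})                       # -1 marks noise points
```

A hierarchical refinement, as in the abstract, would rerun the clustering within a parent cluster with tighter parameters to split it into sub-clusters.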

How to cite: Menandro, P., Bastos, A., Misiuk, B., and Brown, C.: Applying a multi-method framework to analyze the multispectral acoustic response of the seafloor, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12128, https://doi.org/10.5194/egusphere-egu22-12128, 2022.

EGU22-13349 | Presentations | GM6.6

Global Multi-Resolution Topography (GMRT) Synthesis – Tools and Workflows for Processing, Integrating and Accessing Multibeam Sonar Data 

Vicki Ferrini, John Morton, Hayley Drennon, Rafael Uribe, Emily Miller, Tinah Martin, Frank Nitsche, Andrew Goodwillie, and Suzanne Carbotte

The Global Multi-Resolution Topography (GMRT) Synthesis is a multi-resolution Digital Elevation Model (DEM) developed at the Lamont-Doherty Earth Observatory of Columbia University. The data synthesis is maintained in three projections and is managed with a scalable global architecture and tiling scheme.  Primary content assembled into GMRT includes a curated multibeam bathymetry component that is derived from processed swath files and is gridded at ~100m resolution or better. These data are seamlessly assembled with other publicly available gridded data sets, including bathymetry and topography data at a variety of resolutions.  GMRT delivers the best resolution data that have been curated for a particular area of interest, and allows users to extract custom grids, images, points and profiles.

Most data processing and curation effort for GMRT is focused on cleaning and reviewing ship-based multibeam sonar data to facilitate gridding at their full spatial resolution. In addition to  performing standard data cleaning and applying necessary corrections to data, GMRT tools are used to review and assess swath data in the context of the existing data synthesis. This approach ensures that data are fit for purpose and will integrate well with existing content, and is especially well-suited for ensuring the quality of data acquired during transits. GMRT tools and workflows used for data cleaning and assessment have recently been adapted for distributed use to enable the broader community to leverage this approach, streamlining the data pipeline and ensuring high quality processed swath data can be delivered to public archives. This presentation will include a summary of GMRT tools, opportunities, and lessons learned.

How to cite: Ferrini, V., Morton, J., Drennon, H., Uribe, R., Miller, E., Martin, T., Nitsche, F., Goodwillie, A., and Carbotte, S.: Global Multi-Resolution Topography (GMRT) Synthesis – Tools and Workflows for Processing, Integrating and Accessing Multibeam Sonar Data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13349, https://doi.org/10.5194/egusphere-egu22-13349, 2022.

EGU22-13440 | Presentations | GM6.6

Seabed mapping using Synthetic aperture sonar and AUV - important tools for studies of cold seep habitats

Terje Thorsnes and Shyam Chand

Cold seeps are commonly associated with water column and seabed features. Active seeps form acoustic flares in the water column and can be detected using data from singlebeam or multibeam echosounders. They may be associated with pockmarks, but the majority of pockmarks on the Norwegian continental shelf have proven to be inactive. Cold seeps are also commonly associated with carbonate crust fields exposed at the seabed.
Studies using multibeam echosounder water column data in the Håkjerringdjupet region, underlain by the petroleum province Harstad Basin, have revealed more than 200 active gas flares related to cold seeps. We have studied the seabed around some of these using the HUGIN HUS AUV equipped with a HiSAS 1030 Synthetic Aperture Sonar (SAS) from Kongsberg. The SAS provided a 2 x 150 m wide swath. The primary product is the sonar imagery, with a pixel resolution of up to c. 3 x 3 cm. For selected areas, bathymetric grids with 20 x 20 cm cells were produced, giving unrivalled resolution at these water depths. The carbonate crust fields normally have a characteristic appearance, with low reflectivity and a rugged morphology compared to the surrounding sediments.
The interpretation of the acoustic data was verified by visual inspection using the TFish photo system on the AUV, and at a later stage by ROV video footage and physical sampling. The integration of hullborne echosounder data with AUV-mounted acoustic and visual tools provides a very powerful approach for studies of cold seep habitats and related seabed features.
An important conclusion from the study is that many pockmarks are not associated with active gas seeps today, and that many of the presently active gas seeps are associated with carbonate crust fields which are readily identifiable from synthetic aperture sonar data.

How to cite: Thorsnes, T. and Chand, S.: Seabed mapping using Synthetic aperture sonar and AUV - important tools for studies of cold seep habitats, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13440, https://doi.org/10.5194/egusphere-egu22-13440, 2022.

EGU22-3601 | Presentations | GI5.5 | Highlight

Towards an automatic real-time seismic monitoring system for the city of Oslo 

Erik Myklebust, Andreas Köhler, and Anna Maria Dichiarante

Global estimates for future growth indicate that city inhabitation will increase by 13% due to a gradual shift in residence from rural to urban areas. The continuous increase in urban population has caused many cities to upgrade their infrastructures and embrace the vision of a “smart city”. Data collection through sensors represents the base layer of every smart-city solution. Large datasets are processed, and relevant information is transferred to the police, local authorities, and the general public to facilitate decisions and to optimize the performance of cities in areas such as transport, health care, safety, natural resources and energy. The objective of the GEObyIT project is to provide a real-time risk reduction system in an urban environment by applying machine learning methodologies to automatically identify and categorise different types of geodata, i.e., seismic events and geological structures. The project focuses on the city of Oslo, Norway, addressing the common need of two departments of the municipality, i.e., the Emergency Department and the Water and Sewage Department. In the present work, we focus on passive seismic records acquired with the objective of quickly locating urban events as well as continuously monitoring changes in the near surface. For this purpose, a seismic network of Raspberry Shake 3D sensors, connected to GSM modems to facilitate real-time data transfer, was deployed in target areas within the city of Oslo in 2021. We present preliminary results of three approaches applied to the continuous data: (1) automatic detection of metro trains, (2) automatic identification of outlier events such as construction and mining blasts, and (3) noise interferometry to monitor the near sub-surface in an area with quick clay. For (1), we use a supervised method based on convolutional neural networks trained with visually identified seismic signals on three sensors distributed along a busy metro track.
Application to continuous data allowed us to reliably detect trains as well as their direction of travel, without triggering on other events. Further development of this approach will be useful either to sort out known repeating seismic signals or to monitor traffic in an urban environment. In approach (2), we aim to detect rare or unusual seismic events using an outlier detection method. A convolutional autoencoder was trained to create dense features from the continuous signals of each sensor. These features are used in a one-class support vector machine to detect anomalies. We were able to identify a series of construction and mine blasts, a meteor signal, as well as two earthquakes. Finally, we apply seismic noise interferometry to close-by sensor pairs to measure temporal variations in the shallow ground (3). We observe clear seismic velocity variations during periods of strong frost in winter 2021/2022. This opens up the potential to also detect non-seasonal changes in the ground, for example related to instabilities in the quick clay deposits located within the city of Oslo.
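The outlier-detection stage of approach (2), in which dense autoencoder features are scored by a one-class support vector machine, can be sketched with scikit-learn (synthetic Gaussian features stand in for the autoencoder embeddings; the parameters are illustrative):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
# stand-in for dense autoencoder features: ordinary background noise
# occupies a compact region of feature space, rare events fall outside it
background = rng.normal(0.0, 1.0, size=(500, 16))
blasts = rng.normal(6.0, 1.0, size=(5, 16))

# train only on "normal" data; nu bounds the fraction of training outliers
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(background)
pred = ocsvm.predict(np.vstack([background[:10], blasts]))  # +1 inlier, -1 outlier
```

In a deployed pipeline, windows flagged -1 would be passed on for review as candidate unusual events such as blasts, meteors or earthquakes.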

How to cite: Myklebust, E., Köhler, A., and Dichiarante, A. M.: Towards an automatic real-time seismic monitoring system for the city of Oslo, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3601, https://doi.org/10.5194/egusphere-egu22-3601, 2022.

EGU22-7094 | Presentations | GI5.5

Predicting and comparing canopy biomass by satellite-extracted vegetation indices and a temperature-driven phenological modelling approach

W.-C. Pan and S.-T. Cheng

Urban forests provide several important ecosystem services to city residents and the urban environment, most of which are related to trees' canopy biomass. To understand the canopy biomass dynamics affecting these ecosystem services, this study applied and compared two approaches for predicting the canopy biomass of Koelreuteria elegans street trees in the city of Taipei, Taiwan. The first approach extracted vegetation indices (VI), including the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI), from a time series of 2018 Sentinel-2 satellite images to represent signals of tree canopy variation; image classification based on the VI time series was then used to extract pixels with high canopy cover and to examine the associated phenological activities. The second approach applied a system dynamics model to capture changes in canopy phenological activity across seasons through factors of canopy size, leaf duration, and phenology events, all controlled by an accumulated-temperature function characterizing green-up and defoliation mechanisms. The growth temperature and growth rate of new leaves were calibrated with phenological records. Results showed good agreement between the satellite-extracted vegetation indices and the temperature-driven phenological model. Both NDVI and EVI captured the start of the spring growth of Koelreuteria elegans in March and a full-sized canopy in April, with the growing season extending to the end of September, followed by the main defoliation from October down to the lowest canopy size in January and February of the following year. Based on the image classification results for pure canopy cover, the maximum values of NDVI and EVI were 0.443 and 0.486, while the minimums were 0.08 and 0.163, respectively.
In comparison, the canopy phenological model showed similar trends: canopy biomass reached its lowest point in February, entered a rapid growth phase in March, and reached full canopy size in April. Although the model also predicted a main growing season lasting until October, the leaves of Koelreuteria elegans never completely fell off during the defoliation period, because the actual monthly minimum average temperature in the city of Taipei remained above the 10 °C control threshold. Based on these results, we suggest that when ground tree survey and inventory data are available, both the satellite-extracted vegetation indices and the modelling approach can provide useful predictions for landscape planning and urban forestry management.
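The two indices used to track canopy variation are simple band combinations. A short sketch (the EVI coefficients shown are the widely used standard set; the band arguments are illustrative reflectance values, not values from the study):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the standard coefficient set;
    the blue band and soil-adjustment term reduce atmospheric and
    background effects relative to NDVI."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)
```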

How to cite: Pan, W.-C. and Cheng, S.-T.: Predicting and comparing canopy biomass by satellite-extracted vegetation indices and a temperature-driven phenological modelling approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7094, https://doi.org/10.5194/egusphere-egu22-7094, 2022.

EGU22-8095 | Presentations | GI5.5

Mining impact in a coal exploitation under an urban area: detection by Sentinel-1 SAR data 

Jose Cuervas-Mons, María José Domínguez-Cuesta, Félix Mateos-Redondo, Oriol Monserrat, and Anna Barra

In this work, A-DInSAR techniques are applied in Central Asturias (N Spain). This area contains the most important cities of the region, as well as industry, port infrastructures and a dense road network. Moreover, the region is especially known for its historical coal exploitation, which was developed mainly in the Central Coal Basin for almost two centuries and has been progressively abandoned since the beginning of the 21st century. The main aim of this study is to detect and analyse deformations associated with this underground coal mining activity. The following methodology was applied: 1) acquisition and processing of 113 SAR images, provided by Sentinel-1A and B in descending trajectory between January 2018 and February 2020, by means of the PSIG software; 2) generation of the Line-of-Sight (LOS) mean deformation velocity map (in mm year-1) and deformation time series (in mm); 3) analysis of detected terrain displacements and definition of the mining impact. The results show an LOS velocity (VLOS) range between -18.4 and 37.4 mm year-1, and accumulated ground displacements of -69.1 and 75.6 mm. The analysis, interpretation and validation of these ground motions allowed us to differentiate local sectors with recent deformation related to subsidence and uplift, with maximum VLOS of -18.4 mm year-1 and 9.5 mm year-1. This study represents an important contribution to improving the knowledge of deformations produced by the impact of coal mining activity in a mountainous and urbanized region. In addition, this work corroborates the reliability and usefulness of A-DInSAR techniques as powerful tools in the study and analysis of geological hazards at regional and local scales, and for the monitoring and control of underground mining infrastructures.

How to cite: Cuervas-Mons, J., Domínguez-Cuesta, M. J., Mateos-Redondo, F., Monserrat, O., and Barra, A.: Mining impact in a coal exploitation under an urban area: detection by Sentinel-1 SAR data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8095, https://doi.org/10.5194/egusphere-egu22-8095, 2022.

EGU22-8156 | Presentations | GI5.5

Investigating the carbon biogeochemical cycle at Mt Etna 

Maddalena Pennisi, Simone D'Incecco, Ilaria Baneschi, Matteo Lelli, Antonello Provenzale, and Brunella Raco

Continuous acquisition of CO2 soil flux data started at Mt Etna in November 2021, with the aim of assessing a first balance between CO2 of volcanic and biological origin. Our long-term goal is an interdisciplinary study of volcanic, biological, ecological, biogeochemical, climatic and biogeographical aspects, including the anthropogenic impact on the environment. All aspects are integrated in the study of the so-called Critical Zone, i.e. the layer between the deep rock and the top of the vegetation where the main biological, hydrological and geological processes of the ecosystem take place. The new research activity at Mt Etna is performed within the framework of the PON-GRINT project for infrastructure enhancement (EU, MIUR), and it complements ongoing activities at Gran Paradiso National Park (Italian Alps) and Ny-Ålesund (Svalbard, Norway, High Arctic) in the framework of the IGG-CNR Critical Zone Observatories.

During the first phase of the project, two fixed stations were installed at two sites at Piano Bello (Valle del Bove, Milo), in an area where the endemic Genista aetnensis grows. An Eddy Covariance system for measuring net CO2 ecosystem exchange and a weather station will be installed in 2022. Carbon stable isotope data will be acquired periodically using in-situ instrumentation (i.e. Delta Ray). The installation sites were selected after CO2 soil flux surveys around the volcano using a portable accumulation chamber. The two stations installed at Piano Bello consist of an automatic accumulation chamber fixed to the ground, a mobile lid with a diffusion infrared sensor for measuring CO2, a data logger and a sensor for measuring soil moisture and temperature. The accumulation chambers are programmed to acquire ecosystem respiration data every hour throughout the day. Data are transmitted to the IGG data collection center. The new IGG-CNR Mt Etna CZO will contribute to investigating CO2 fluxes at the soil-vegetation-atmosphere interface in different geological and environmental contexts. We benefit from the collaboration with the National Institute of Geophysics and Volcanology (INGV), the Ente Parco dell'Etna, and the Dipartimento Regionale dello Sviluppo Rurale e Territoriale di Catania.

How to cite: Pennisi, M., D'Incecco, S., Baneschi, I., Lelli, M., Provenzale, A., and Raco, B.: Investigating the carbon biogeochemical cycle at Mt Etna, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8156, https://doi.org/10.5194/egusphere-egu22-8156, 2022.
EGU22-8472 | Presentations | GI5.5

Ground motion emissions due to wind turbines: Results from two wind farms on the Swabian Alb, SW Germany 

Wind turbine (WT) ground motion emissions have a significant influence on sensitive measuring equipment like seismic monitoring networks. WTs permanently excite ground motions at certain constant frequencies due to the eigenmodes of the tower and blades as well as the motion of the blades. The emitted waves have frequencies mainly below 10 Hz, which are relevant for the observation of, e.g., local tectonic or induced seismicity. Furthermore, frequencies proportional to the blade passing frequency can be observed in ground motion data above 10 Hz, closely linked to acoustic emissions of the turbines. WTs are often perceived negatively by residents living near wind farms, presumably due to low-frequency acoustic emissions. Therefore, similarities in ground motion and acoustic data provide constraints on the occurrence of such negatively perceived emissions and possible counter-measures to support the acceptance of WTs.

We study ground motion signals in the vicinity of two wind farms on the Swabian Alb in Southern Germany consisting of three and sixteen WTs, respectively, which are of the same turbine type, accompanied by acoustic measurements and psychological surveys. Part of the measurements is conducted in municipalities near the respective wind farms where residents report that they are affected by emissions. Additional measurements are conducted in the forests surrounding the WTs, and within WT towers. The wind farms are located on the Alb peneplain at 700-800 m elevation, approximately 300 m above the municipalities. Results indicate that WTs are perceived more negatively in the location where the wind farm is closer to the municipality (ca. 1 km) and where other environmental noise sources like traffic occur more frequently. At the location more distant from the WTs (ca. 2 km), even though more WTs are installed, residents are less affected. To improve the prediction of ground motion emissions, instruments are set up in profiles to study the amplitude decay over distance, which is linked to the local geology.

This study is supported by the Federal Ministry for Economic Affairs and Energy based on a resolution of the German Bundestag (03EE2023D).

How to cite: Gassner, L. and Ritter, J.: Ground motion emissions due to wind turbines: Results from two wind farms on the Swabian Alb, SW Germany, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8472, https://doi.org/10.5194/egusphere-egu22-8472, 2022.

EGU22-11008 | Presentations | GI5.5

Preliminary Analysis on Multi-Devices Monitoring of Potential Deep-Seated Landslide in Xinzhuang, Southern Taiwan 

Ji-Shang Wang, Tung-Yang Lai, Yu-Chao Hsu, Guei-Lin Fu, Cheng Hsiu Tsai, and Ting-Yin Huang

In-situ monitoring of slopes is crucial for recognizing and recording the occurrence of landslides. Determining the correlation between monitoring data and hillslope displacement would support early warning for landslide-induced disasters. The Xinzhuang potential deep-seated landslide area, located in Kaohsiung City, southern Taiwan, has been identified by the Taiwanese executive authority; it covers an area of 10.3 hectares and includes 20 buildings, with an average slope of 22.8 degrees. The lithology of the upper slope is sand-shale interbeds with high sand content, whereas the lower slope consists of shale with high mud content.

To support early warning and characterize landslide displacement in this study, ground displacement was monitored using a tiltmeter and GNSS RTK (Real Time Kinematic), and hydrological data (rainfall and groundwater level) were recorded every 10 minutes by automatic gauges. Furthermore, we carried out manual borehole inclinometer measurements to locate the possible subsurface sliding surface.

This study has been conducted for two years, and the results show that: (1) local shallow creep (4-5 meters underground) in the central deep-seated landslide area was recorded by the tiltmeter, GNSS and borehole inclinometer measurements; (2) the groundwater level is a significant factor for the creep displacements at this site; (3) the displacement accelerates when the groundwater level rises above 2.1 meters; (4) the 6-hour displacement is highly correlated with accumulated rainfall and groundwater level. Moreover, the results have been applied to the landslide early-warning system of the Taiwanese authority.
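The reported observations suggest a simple rule-based warning logic. The sketch below combines the 2.1 m groundwater threshold from this study with a hypothetical displacement threshold; the abstract does not give the operational warning values:

```python
GW_THRESHOLD_M = 2.1  # groundwater level above which creep accelerated at this site

def warning_state(groundwater_m, disp_6h_mm, disp_threshold_mm=5.0):
    """Toy two-level rule combining the two observations reported in the
    abstract. disp_threshold_mm is a hypothetical value, not from the study."""
    if groundwater_m > GW_THRESHOLD_M and disp_6h_mm > disp_threshold_mm:
        return "alert"   # high water table and fast 6-hour displacement
    if groundwater_m > GW_THRESHOLD_M:
        return "watch"   # high water table: acceleration becomes likely
    return "normal"
```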

How to cite: Wang, J.-S., Lai, T.-Y., Hsu, Y.-C., Fu, G.-L., Tsai, C. H., and Huang, T.-Y.: Preliminary Analysis on Multi-Devices Monitoring of Potential Deep-Seated Landslide in Xinzhuang, Southern Taiwan, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11008, https://doi.org/10.5194/egusphere-egu22-11008, 2022.
EGU22-11541 | Presentations | GI5.5

Slope stability analysis of embankment dam under total and effective pore pressure 

Many embankment dams, both newly constructed and in operation, need to be evaluated in terms of slope stability. The necessity of considering body forces, pore-water pressures and a variety of soil types in the analysis complicates the application of methods that are well founded in the mechanics of continua and employ representative constitutive equations.

This study compares stability analyses using total stresses at the end of construction with effective-stress analyses performed a couple of years later, after the first impounding. Studies have indicated the advantages of employing an effective-stress failure criterion (Bishop, 1952; Henkel and Skempton, 1955; Bishop, 1960) for the analysis and design of embankment dams. Pore-water pressures were determined from piezometer readings from construction until the dam entered operation.

This paper presents the results of stability analyses of the embankment dam under both sets of parameters and conditions, showing that pore-water pressures influence the slope stability of the embankment.
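The abstract does not detail the stability method used; as a minimal illustration of how pore-water pressure enters an effective-stress analysis, the standard textbook infinite-slope factor-of-safety expression can be written as follows (all parameter values in a call are hypothetical, not from this study):

```python
import math

def infinite_slope_fos(c_eff, phi_deg, gamma, depth, beta_deg, pore_pressure):
    """Infinite-slope factor of safety in effective-stress terms.
    c_eff: effective cohesion (kPa), phi_deg: effective friction angle,
    gamma: unit weight (kN/m3), depth: slip depth (m), beta_deg: slope angle,
    pore_pressure: u on the slip surface (kPa). Higher u lowers the FoS."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal_stress = gamma * depth * math.cos(beta) ** 2
    shear_stress = gamma * depth * math.sin(beta) * math.cos(beta)
    return (c_eff + (normal_stress - pore_pressure) * math.tan(phi)) / shear_stress
```

For the same slope, raising the pore pressure directly reduces the effective normal stress and hence the resisting shear strength.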

How to cite: Hartanto, T.: Slope stability analysis of embankment dam under total and effective pore pressure, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11541, https://doi.org/10.5194/egusphere-egu22-11541, 2022.

EGU22-11730 | Presentations | GI5.5

Road surface friction measurement based on intelligent road sensor and machine learning approaches 

Mezgeen Rasol, Franziska Schmidt, and Silvia Ientile

Reliable prediction of the friction coefficient on the road surface is essential to enhance the resilience of traffic management procedures and the safety of road users. Critical weather conditions can have a significant impact on the road surface and decrease the available friction coefficient in extreme conditions. Weather parameters involved in traffic management include water film thickness, ice percentage, pavement temperature, ambient temperature, and freezing point. Smart monitoring of road surface friction changes over time, i.e. real-time prediction of future friction coefficient changes based on intelligent road-based weather sensors, is crucial to avoid uncontrolled conditions during extreme weather. For this reason, intelligent data analysis such as machine learning is key to providing a holistic, robust decision-making tool that supports road operators or owners in traffic management procedures. In this study, a machine learning approach is trained on 18 months of data collected from a real case study in Spain, and the results show good agreement between the measured and predicted friction coefficients. The trained model has been validated with various cross-validation approaches, and high model accuracy is observed.
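The abstract does not name the specific learning algorithm; as a stand-in sketch of the train/cross-validate workflow, the following implements a k-nearest-neighbours regressor with leave-one-out validation on synthetic friction data (the feature and values are illustrative, not the project's data):

```python
def knn_predict(X, y, x_new, k=3):
    """Predict by averaging the targets of the k nearest training rows
    (squared Euclidean distance)."""
    ranked = sorted((sum((a - b) ** 2 for a, b in zip(row, x_new)), yi)
                    for row, yi in zip(X, y))
    return sum(yi for _, yi in ranked[:k]) / k

def loo_mae(X, y, k=3):
    """Leave-one-out cross-validation: mean absolute prediction error."""
    errors = []
    for i in range(len(X)):
        X_tr, y_tr = X[:i] + X[i + 1:], y[:i] + y[i + 1:]
        errors.append(abs(knn_predict(X_tr, y_tr, X[i], k) - y[i]))
    return sum(errors) / len(errors)

# Synthetic records with a single feature [ice percentage];
# friction falls as ice cover grows (hypothetical relationship)
X = [[ice] for ice in range(0, 100, 5)]
y = [0.8 - 0.005 * ice for ice in range(0, 100, 5)]
```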

This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 769129 (PANOPTIS project).

How to cite: Rasol, M., Schmidt, F., and Ientile, S.: Road surface friction measurement based on intelligent road sensor and machine learning approaches, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11730, https://doi.org/10.5194/egusphere-egu22-11730, 2022.

EGU22-12263 | Presentations | GI5.5

System identification of a high-rise building: a comparison between a single station measuring translations and rotations, and a traditional array approach. 

Yara Rossi, John Clinton, Eleni Chatzi, Cédric Schmelzbach, and Markus Rothacher

We demonstrate that the extended dynamic response of an engineered structure can be obtained from just a single measurement at one position if rotation is recorded in combination with translation. Such a single-station approach could save significant time, effort and cost when compared with traditional structural characterization using arrays. In our contribution we will focus on the monitoring of a high-rise building by tracking its dynamic properties, e.g., natural frequencies, mode shapes and damping. We present the results of the system identification for the Prime Tower in Zurich – with a height of 126 m, this concrete frame structure is the third-tallest building in Switzerland. It has been continuously monitored by an accelerometer (EpiSensor) and a co-located rotational sensor (BlueSeis) located near the building center on the roof for the past year. The motion on the tower roof includes significant rotations as well as translation, which can be precisely captured by the monitoring station. More than 9 natural frequencies, including the first 3 fundamental modes as well as the next two overtones, where translations are coupled with rotations, are observed between 0.3 and 10 Hz, a frequency band of key interest for earthquake excitation, making this investigation essential. Using temporary arrays of accelerometers located across the roof and along the length of the building to perform a traditional dynamic characterisation, we can compare the array solution with the new single-location solution in terms of system identification for the Prime Tower.

How to cite: Rossi, Y., Clinton, J., Chatzi, E., Schmelzbach, C., and Rothacher, M.: System identification of a high-rise building: a comparison between a single station measuring translations and rotations, and a traditional array approach., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12263, https://doi.org/10.5194/egusphere-egu22-12263, 2022.

EGU22-12901 | Presentations | GI5.5

Creating a spatially explicit road-river infrastructure dataset to benefit people and nature 

Rochelle Bristol, Stephanie Januchowski-Hartley, Sayali Pawar, Xiao Yang, Kherlen Shinebayar, Michiel Jorissen, Sukhmani Mantel, Maria Pregnolato, and James White

Worldwide, roads cross most rivers big and small, but if nobody maps the locations, do they exist? In our experience, the answer is no, and structures such as culverts and bridges at these road-river crossings have gone overlooked in research into the impacts that infrastructure can have on rivers and the species that depend on them. There remains a need for spatially explicit data on road-river crossings, as well as identification of structure types, to support research and monitoring that guides more proactive approaches to infrastructure management. Our initial focus was on mapping road-river structures in Wales, United Kingdom, so as to better understand how these could be impacting nature, particularly migratory fishes. However, as we began developing the spatial dataset, we became aware of broader applications, including relevance to hazard management and the movement of people and goods in support of livelihoods and well-being. In this talk, I will discuss our initial approach to tackling this problem in Wales, and how we learned from that experience and refined the approach for mapping in England, including our use of openly available remotely sensed imagery from Google and Ordnance Survey so as to ensure the data can be reused and modified by others for their needs and uses. I will present a spatially explicit dataset of road-river structures in Wales, including information about surrounding environmental attributes, and discuss how these can help us to better understand infrastructure vulnerability and patterns at catchment and landscape scales. I will discuss the potential for diverse applications of this road-river structure dataset, particularly in relation to supporting real-time monitoring and providing the baseline data needed for any future machine learning or computational modelling advances for monitoring road-river structures.

How to cite: Bristol, R., Januchowski-Hartley, S., Pawar, S., Yang, X., Shinebayar, K., Jorissen, M., Mantel, S., Pregnolato, M., and White, J.: Creating a spatially explicit road-river infrastructure dataset to benefit people and nature, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12901, https://doi.org/10.5194/egusphere-egu22-12901, 2022.

EGU22-26 | Presentations | NH6.5

Detection of nonlinear kinematics in InSAR displacement time series for hazard monitoring 

Fabio Bovenga, Alberto Refice, Ilenia Argentiero, Raffaele Nutricato, Davide Oscar Nitti, Guido Pasquariello, and Giuseppe Spilotro

Multi-temporal SAR interferometry (MTInSAR) allows analysing wide areas, identifying critical ground instabilities, and studying the evolution of phenomena over long time scales. The identification of MTInSAR displacement trends showing non-linear kinematics is of particular interest, since these may include warning signals related to the pre-failure behaviour of natural slopes and artificial structures. Recently, the authors introduced two innovative indexes for characterising MTInSAR time series: one relies on fuzzy entropy and measures the disorder in a time series [1]; the other performs a statistical test based on the Fisher distribution to select the polynomial model that most reliably approximates the displacement trend [2].

This work reviews the theoretical formulation of these indexes and evaluates their performance by simulating time series with different characteristics in terms of kinematics (stepwise linear with different breakpoints and velocities), noise level, signal length and temporal sampling. Finally, the proposed procedures are used to analyse displacement time series derived by processing Sentinel-1 and COSMO-SkyMed datasets acquired over the southern Italian Apennines (Basilicata region), in an area where several landslides occurred in the recent past. The MTInSAR displacement time series have been analysed using the proposed methods, searching for nonlinear trends that are possibly related to relevant ground instabilities and, in particular, to potential early warning signals for landslide events. Specifically, the work presents an example of slope pre-failure monitoring on the Pomarico landslide, an example of slope post-failure monitoring on the Montescaglioso landslide, and a few examples of structures (such as buildings and roads) affected by instability related to different causes.
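The Fisher-test index of [2] selects between nested polynomial trend models by comparing their residuals. A minimal sketch of this idea, assuming a plain F-statistic between two polynomial fits (not the authors' implementation):

```python
def polyfit_rss(t, y, deg):
    """Least-squares polynomial fit via normal equations (Gaussian elimination
    with partial pivoting); returns the residual sum of squares."""
    p = deg + 1
    A = [[sum(ti ** (i + j) for ti in t) for j in range(p)] for i in range(p)]
    b = [sum(yi * ti ** i for ti, yi in zip(t, y)) for i in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * p
    for i in range(p - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, p))) / A[i][i]
    return sum((yi - sum(cj * ti ** j for j, cj in enumerate(coef))) ** 2
               for ti, yi in zip(t, y))

def f_statistic(t, y, deg_low, deg_high):
    """F-statistic comparing a low-order trend model against a higher-order one;
    large values favour the richer model."""
    rss1, rss2 = polyfit_rss(t, y, deg_low), polyfit_rss(t, y, deg_high)
    p1, p2, n = deg_low + 1, deg_high + 1, len(t)
    return ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))

# Example: an accelerating (quadratic) series plus small alternating noise
t = list(range(10))
y_quad = [ti ** 2 + 0.05 * (-1) ** ti for ti in t]
```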

References

[1] Refice, A.; Pasquariello, G.; Bovenga, F. Model-Free Characterization of SAR MTI Time Series. IEEE Geosci. Remote Sens. Lett. 2020, doi:10.1109/lgrs.2020.3031655.

[2] Bovenga, F.; Pasquariello, G.; Refice, A. Statistically-based trend analysis of MTInSAR displacement time series. Remote Sens. 2021, doi:10.3390/rs13122302.

Acknowledgments

This work was supported in part by the Italian Ministry of Education, University and Research, D.D. 2261 del 6.9.2018, Programma Operativo Nazionale Ricerca e Innovazione (PON R&I) 2014–2020 under Project OT4CLIMA; and in part by Regione Puglia, POR Puglia FESR-FSE 2014-2020 - Asse I - Azione 1.6 under Project DECiSION (p.n. BQS5153).

How to cite: Bovenga, F., Refice, A., Argentiero, I., Nutricato, R., Nitti, D. O., Pasquariello, G., and Spilotro, G.: Detection of nonlinear kinematics in InSAR displacement time series for hazard monitoring, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-26, https://doi.org/10.5194/egusphere-egu22-26, 2022.

EGU22-935 | Presentations | NH6.5

The value of InSAR Coherence in TanDEM-X and Sentinel-1 for monitoring world’s forests 

Paola Rizzoli, José-Luis Bueso-Bello, Ricardo Dal Molin, Daniel Carcereri, Carolina Gonzalez, Michele Martone, Luca Dell'Amore, Nicola Gollin, Pietro Milillo, and Manfred Zink

Covering about 30 percent of the Earth’s surface, forests are of paramount importance for the Earth’s ecosystem. They act as effective carbon sinks, reducing the concentration of greenhouse gases in the atmosphere, and help mitigate climate change effects. This delicate ecosystem is currently threatened and degraded by anthropogenic activities and natural hazards, such as deforestation, agricultural activities, farming, fires, floods, winds, and soil erosion. In an era of dramatic changes for the Earth’s ecosystems, the scientific community urgently needs to better support public and societal authorities in decision-making processes. The availability of reliable, up-to-date measurements of forest resources, evolution, and impact is therefore essential for environmental preservation and climate change mitigation.

In this scenario, Synthetic Aperture Radar (SAR) systems, thanks to their capability to operate in the presence of clouds, represent an attractive alternative to optical sensors for remote sensing over forested areas, such as tropical and boreal forests, which are hidden by clouds for most of the year.

In this work, we will investigate the potential of SAR interferometry (InSAR) for mapping forests worldwide and retrieving important biophysical parameters, such as canopy height and above-ground biomass. We will compare the pros and cons of single-pass (bistatic) versus repeat-pass InSAR, discussing their main peculiarities and limitations. In particular, we will concentrate on the analysis of the interferometric coherence and on the relationship between volume and temporal decorrelation and forest parameter estimation. We will present the work done at DLR for mapping forests worldwide at high spatial resolution using the TanDEM-X bistatic coherence, together with the potential of Sentinel-1 InSAR time series for regular monitoring of vegetated areas. We will also discuss the algorithms currently under development for the estimation of above-ground biomass by fusion of InSAR and multi-spectral optical data, based on the latest advances in artificial intelligence and, in particular, deep learning, presenting the first promising results for a more effective exploitation of current EO datasets.

 

How to cite: Rizzoli, P., Bueso-Bello, J.-L., Dal Molin, R., Carcereri, D., Gonzalez, C., Martone, M., Dell'Amore, L., Gollin, N., Milillo, P., and Zink, M.: The value of InSAR Coherence in TanDEM-X and Sentinel-1 for monitoring world’s forests, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-935, https://doi.org/10.5194/egusphere-egu22-935, 2022.

EGU22-1327 | Presentations | NH6.5

Considerations on regional continuous Sentinel-1 monitoring services over three different regions 

Matteo Del Soldato, Pierluigi Confuorto, Davide Festa, Silvia Bianchini, and Federico Raspini

In 2016, the first continuous monitoring service worldwide was proposed and implemented over the Tuscany Region (central Italy). It was the first application of SAR (Synthetic Aperture Radar) images for continuous monitoring of on-going ground deformation, exploiting data-mining of PS (Permanent Scatterers) time series to identify changes in trend, i.e. sudden accelerations or decelerations. The data-mining algorithm was designed to automatically recognize trend variations exceeding a velocity threshold within a given time span. The continuous monitoring approach benefits from the launch, in 2014, of the Sentinel-1 constellation, which provides a constant flux of images every 12 days (halved to 6 days since 2016, considering the twin satellite at 180° on the same orbit). Two years after Tuscany, in April 2018, the Valle d’Aosta Region, north-western Italy, implemented a similar system to monitor its territory. The challenge was to apply the same approach, with very few changes, in a region with completely different geological and geomorphological features, also considering the snow and glacier cover in winter. In fact, the Tuscany territory is characterized by wide plains, gentle slopes, and mountainous ridges limited to the eastern border, coinciding with the Northern Apennines. Consequently, ground deformation phenomena in Tuscany are related to active and dormant landslides and to subsidence, mainly due to groundwater extraction and, less commonly, geothermal activity. The Valle d’Aosta Region, on the contrary, is almost entirely characterized by steep slopes with a narrow central valley. For this reason, the ground deformations to be recognized and monitored are almost exclusively related to landslides, DSGSDs (deep-seated gravitational slope deformations) or rock glaciers. A year later, in July 2019, continuous monitoring was also activated over the Veneto Region, north-eastern Italy. 
Its territory is partially similar to Tuscany in its southern portion and to Valle d’Aosta in its northern part. Considering these geological and geomorphological properties, the ground deformations detected in the Veneto Region share many similarities with those of the other two regions. These three laboratories were critically investigated and, after one year of operation, the benefits and drawbacks of this approach in different environments were highlighted. For each region, the following aspects were investigated separately: (i) the spatial distribution of the anomalous areas, considering slope, aspect, land cover and elevation; (ii) the persistence of the anomalies over time; and (iii) the correspondence between the highlighted moving areas and known inventories. Finally, the benefits evidenced by the use of this approach, supported by the positive feedback of the regional administrative personnel, as well as the required improvements, were critically discussed.
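The trend-variation detection underlying these services can be sketched as a comparison between recent and long-term PS velocities. The window and threshold below are illustrative placeholders, not the operational values of the Tuscany system:

```python
def fit_velocity(t_years, disp_mm):
    """Least-squares velocity (mm/year) of a displacement series."""
    n = len(t_years)
    tm = sum(t_years) / n
    dm = sum(disp_mm) / n
    num = sum((ti - tm) * (di - dm) for ti, di in zip(t_years, disp_mm))
    den = sum((ti - tm) ** 2 for ti in t_years)
    return num / den

def trend_anomaly(t_years, disp_mm, window=5, threshold_mm_yr=10.0):
    """Flag a PS time series whose velocity over the last `window` epochs
    departs from the long-term velocity by more than the threshold.
    Window and threshold are hypothetical, not the operational settings."""
    v_all = fit_velocity(t_years, disp_mm)
    v_recent = fit_velocity(t_years[-window:], disp_mm[-window:])
    return abs(v_recent - v_all) > threshold_mm_yr
```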

How to cite: Del Soldato, M., Confuorto, P., Festa, D., Bianchini, S., and Raspini, F.: Considerations on regional continuous Sentinel-1 monitoring services over three different regions, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1327, https://doi.org/10.5194/egusphere-egu22-1327, 2022.

EGU22-1467 | Presentations | NH6.5

Land subsidence hotspots in Central Mexico: from Sentinel-1 InSAR evidence to risk maps 

Francesca Cigna and Deodato Tapete

The use of satellite Interferometric Synthetic Aperture Radar (InSAR) for land subsidence assessment is already a well-established scientific research approach. Although several studies analyze subsidence patterns via integration of InSAR output maps with geospatial layers depicting hazard factors or elements at risk (e.g. surface and bedrock geology, cadastral and infrastructure maps), the body of literature attempting to generate value-added products is still limited. These not only have the potential to be used by stakeholders in urban planning, but also can be updated as new InSAR data are made available. With this scope in mind, this work presents the experience gained across Central Mexico, where land subsidence due to groundwater resource overexploitation is a pressing issue affecting many urban centers and expanding metropolises. Groundwater availability and aquifer storage changes provided by the National Water Commission are analyzed in relation to surface deformation data from wide-area surveys based on InSAR. The Parallel Small BAseline Subset (P-SBAS) method integrated in ESA’s Geohazards Exploitation Platform (GEP) is used to process Sentinel-1 IW big data stacks over a region of 550,000 km2 encompassing the whole Trans-Mexican Volcanic Belt (TMVB) and several major states, including Puebla, Federal District, México, Hidalgo, Querétaro, Guanajuato, Michoacán, Jalisco, San Luis Potosí, Aguascalientes and Zacatecas. A number of hotspots affected by present-day subsidence rates of several cm/year are identified across the TMVB, with extents ranging from localized bowls up to whole valleys or metropolitan areas spanning hundreds of square kilometers. 
Surface faulting hazard and induced risk on urban properties are assessed and discussed with a focus on: (i) Mexico City metropolitan area, one of the most populated and fastest sinking cities globally (up to −40 cm/year vertical, and ±5 cm/year E-W rates) [1]; (ii) the state of Aguascalientes, where a structurally-controlled fast subsidence process (−12 cm/year vertical, ±3 cm/year E-W) affects the namesake valley and capital city [2]; and (iii) the Metropolitan Area of Morelia, a rapidly expanding metropolis where population doubled over the last 30 years and a subsidence-creep-fault process has been identified (−9 cm/year vertical, ±1.7 cm/year E-W) [3]. InSAR results and the derived risk maps prove valuable not only to constrain the land deformation process at the hotspots, but also to quantify properties and population at risk, hence an essential knowledge-base for policy makers and regulators to optimize groundwater resource management, and accommodate existing and future water demands.

 

[1] Cigna F., Tapete D. 2021. Present-day land subsidence rates, surface faulting hazard and risk in Mexico City with 2014-2020 Sentinel-1 IW InSAR. Remote Sensing of Environment, 253, 112161, https://doi.org/10.1016/j.rse.2020.112161

[2] Cigna F., Tapete D. 2021. Satellite InSAR survey of structurally-controlled land subsidence due to groundwater exploitation in the Aguascalientes Valley, Mexico. Remote Sensing of Environment, 254, 112254, https://doi.org/10.1016/j.rse.2020.112254

[3] Cigna F., Tapete D. 2022. Urban growth and land subsidence: Multi-decadal investigation using human settlement data and satellite InSAR in Morelia, Mexico. Science of the Total Environment, 811, 152211. https://doi.org/10.1016/j.scitotenv.2021.152211

How to cite: Cigna, F. and Tapete, D.: Land subsidence hotspots in Central Mexico: from Sentinel-1 InSAR evidence to risk maps, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1467, https://doi.org/10.5194/egusphere-egu22-1467, 2022.

EGU22-1690 | Presentations | NH6.5

Tools for supporting Sentinel-1 data interpretation: the coast of Granada (Spain) 

Oriol Monserrat, Anna Barra, Cristina Reyes-Carmona, Rosa Maria Mateos, Jorge Pedro Galve, Roberto Tomas, Gerardo Herrera Herrera, Marta Béjar Bejar, José Miguel Azañón, Jose Navarro, and Roberto Sarro

In the last few years, satellite interferometry (InSAR) has become a consolidated technique for the detection and monitoring of ground movements. InSAR-based techniques allow large areas to be processed, providing a high number of displacement measurements at low cost. However, the outputs of such techniques are usually not easy to interpret, which hampers their use and makes the analysis time-consuming. This is critical for users who are not familiar with radar data. The European Ground Motion Service (Copernicus) is a new public service that will bring a step forward in this context. However, the capability to exploit it will still rely on user experience. In this context, the development of methodologies and tools to automatize the retrieval of information and to ease the interpretation of results is needed to improve its operational use. Here we propose a set of tools and methodologies to detect and classify Active Deformation Areas and to map the potential damage to anthropic elements, based on differential displacements. We present the results achieved on the coast of Granada, which is strongly affected by slope instabilities. The methodology is applied at a regional scale and allows moving to a detailed local scale of analysis. The presented results have been achieved within the framework of the Riskcoast Project (financed by the Interreg Sudoe Program through the European Regional Development Fund (ERDF)).

 

How to cite: Monserrat, O., Barra, A., Reyes-Carmona, C., Mateos, R. M., Galve, J. P., Tomas, R., Herrera, G. H., Bejar, M. B., Azañón, J. M., Navarro, J., and Sarro, R.: Tools for supporting Sentinel-1 data interpretation: the coast of Granada (Spain), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1690, https://doi.org/10.5194/egusphere-egu22-1690, 2022.

EGU22-2842 | Presentations | NH6.5

Exploiting Sentinel-1 InSAR capabilities for studying the land subsidence process in an urban area 

Alessandro Zuccarini, Benedikt Bayer, Silvia Franceschini, Serena Giacomelli, Gianluigi Di Paola, and Matteo Berti

Since the beginning of the 1960s, the urban area of Bologna has experienced land subsidence due to excessive groundwater withdrawals. Sinking reached its peak in the 1970s, when the subsidence rate attained a maximum value of about 10 cm/year and significant damage to structures and infrastructures occurred. This process has been intensively monitored over the years, and extensive ground displacement data were collected employing various increasingly sophisticated techniques, ranging from topographic levelling to GNSS surveys and, since 1992, satellite interferometry. Satellite data, in particular, allowed an accurate reconstruction of the land subsidence process. The available interferometric data are the results of three different SAR campaigns undertaken by local authorities in which the PSInSAR technique was adopted: 1992 – 2000 (ERS), 2002 – 2006 (ENVISAT) and 2006 – 2011 (RADARSAT). Within this work, a new InSAR survey based on freely available Sentinel-1 (2014 – 2020) ascending and descending orbit data was undertaken by the UniBo spin-off “Fragile”. The software GMTSAR was used to process each interferogram, and a Small BAseline Subset (SBAS) approach was then followed to resolve the ground displacements over time. Great attention was paid to the choice of reference pixels on the existing buildings and structures, in order to maximise their density in the study area, and to the definition of the considered time spans, ranging from 6 to 365 days, allowing both faster and slower ground movements to be analysed. Compared to previous surveys, the displacement map obtained from Sentinel-1 has a much higher spatial and temporal resolution, thus leading to a detailed interpretation of the ongoing subsidence. Results show that the displacement field agrees well with the 3D geological model of the area and that the temporal evolution of the subsidence rate closely matches the piezometric level and groundwater pumping time series.
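The small-baseline network construction described above, with interferogram time spans between 6 and 365 days, can be sketched as a simple pair enumeration (dates as day numbers; this is an illustration of the pair-selection criterion, not the GMTSAR workflow):

```python
from itertools import combinations

def sbas_pairs(dates, min_days=6, max_days=365):
    """Enumerate interferogram pairs whose temporal baseline falls inside
    the window used in the abstract (6 to 365 days)."""
    return [(a, b) for a, b in combinations(sorted(dates), 2)
            if min_days <= b - a <= max_days]
```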

How to cite: Zuccarini, A., Bayer, B., Franceschini, S., Giacomelli, S., Di Paola, G., and Berti, M.: Exploiting Sentinel-1 InSAR capabilities for studying the land subsidence process in an urban area, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2842, https://doi.org/10.5194/egusphere-egu22-2842, 2022.

Multi-temporal interferometric synthetic aperture radar (InSAR) algorithms are nowadays mature tools for analyzing the Earth’s ground deformation with high accuracy. Among them, a significant role is played by algorithms based on small-baseline (SB) multi-look interferograms, which are less affected by decorrelation noise artefacts. Recently, great attention has been devoted to studying the sources of inconsistencies in InSAR products (i.e., ground deformation time series and mean deformation velocity maps) that arise when sets of multi-look SAR interferograms with very short temporal baselines are processed, compared to those obtained using interferograms with longer temporal baselines. In interferometric SAR analyses of the Earth's surface displacements, such spurious signals lead to systematic biases that, if not adequately compensated for, might produce unreliable InSAR ground displacement products.

In this study, we propose a methodology to estimate and correct a set of multi-look SB interferograms based on computing and analyzing sets of (wrapped) non-closure phase triplets. The developed phase estimation method works on each SAR pixel independently, assuming the (unknown) phase bias signal can be approximated as the sum of a constant phase velocity term v and time-dependent (i.e., dependent on the interferogram temporal baseline) phase velocity difference terms Δv(Δti), where Δti is the temporal baseline of the generic i-th interferogram. Once the whole set of triplets that can be formed using short-baseline multi-look interferograms is identified, and considering the mathematical properties of the triplet non-closure phases, we can write an overdetermined system of linear equations, where the known terms are the measured wrapped non-closure phases over the set of identified triplets, namely ΔΦtriplets, and the unknowns are the temporal-baseline-dependent phase velocity difference terms Δv. For example, considering the Sentinel-1 A/B sensors, the temporal baseline is sampled with an atomic sampling time of six days; accordingly, if we accept, for instance, a threshold of 96 days for the maximum allowed temporal baseline of the selected SB interferograms, we have 16 unknowns to be estimated. Once the linear system is solved in the least-squares sense, the phase biases at the different temporal baselines, namely ΔΦbias, are iteratively retrieved by integrating the phase acceleration terms, assuming as the initial condition that the phase bias at the maximum considered temporal baseline is zero, that is Δφbias(Δtmax) = 0.
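The per-pixel estimation described above can be sketched with numpy under simplifying assumptions: regular 6-day acquisitions, a toy bias profile, and a single hard constraint (bias zero at the maximum baseline) standing in for the iterative integration step. For a triplet with baselines t12, t23 and t13 = t12 + t23, the constant-velocity term cancels in the closure phase, leaving only the Δv terms:

```python
import numpy as np

DT = 6                                      # atomic temporal sampling (days)
MAX_BL = 96                                 # maximum allowed temporal baseline
baselines = np.arange(DT, MAX_BL + 1, DT)   # 6, 12, ..., 96 -> 16 unknowns

def triplet_design(n_acq):
    """Rows of the overdetermined system: for each triplet (i, j, k) whose
    interferogram baselines all stay within MAX_BL, the closure phase is
    dv(t12)*t12 + dv(t23)*t23 - dv(t13)*t13."""
    rows = []
    for i in range(n_acq):
        for j in range(i + 1, n_acq):
            for k in range(j + 1, n_acq):
                t12, t23, t13 = (j - i) * DT, (k - j) * DT, (k - i) * DT
                if t13 > MAX_BL:
                    continue
                r = np.zeros(len(baselines))
                r[t12 // DT - 1] += t12
                r[t23 // DT - 1] += t23
                r[t13 // DT - 1] -= t13
                rows.append(r)
    return np.asarray(rows)

A = triplet_design(n_acq=30)
# A constant dv is indistinguishable from the constant velocity term v, so
# pin dv at the maximum baseline to zero, mirroring the initial condition.
constraint = np.zeros((1, len(baselines)))
constraint[0, -1] = 1.0
A_aug = np.vstack([A, constraint])

# Simulate closure phases from a known bias profile, then recover it.
dv_true = 0.01 * (np.exp(-baselines / 48.0) - np.exp(-MAX_BL / 48.0))
b_aug = np.concatenate([A @ dv_true, [0.0]])
dv_est, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
```

The toy exponential bias profile and the constraint implementation are assumptions of this sketch; the abstract's method instead integrates phase acceleration terms iteratively and works with wrapped phases.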

Preliminary experiments, performed on sets of Sentinel-1 A/B SAR data in different geo-morphological conditions, demonstrate the effectiveness of the developed methodology. Additionally, we performed simulations and experiments to test an extension of the developed method to the non-stationary case, e.g., when the phase bias signals depend on the specific acquisition times of the SAR images involved in generating the SB interferograms, and not only on their temporal baselines. Our work lays the groundwork for further investigations aimed at retrieving and analyzing ground properties of the imaged targets, such as soil moisture content or other local ground properties that are usually not considered appropriately by conventional InSAR analyses.

How to cite: Falabella, F. and Pepe, A.: A Method for the Correction of Non-Closure Phase Artefacts in Triplets of Multi-look SAR Interferograms, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4919, https://doi.org/10.5194/egusphere-egu22-4919, 2022.

Interest in using Interferometric Synthetic Aperture Radar (InSAR) for ground motion detection and monitoring is rapidly increasing, thanks to the Copernicus Sentinel-1 satellites, which cover relatively large areas with a 6-day revisit time. Ground motion in many locations, especially urban areas around the world, has been studied using Sentinel-1 data, and the rate and distribution of ground movements have been reported. For Sweden, for example, Fryksten and Nilfouroushan (2019) and Gido et al. (2020) studied active ground subsidence in the cities of Uppsala and Gävle using Sentinel-1 data collected between 2015 and 2020. The Persistent Scatterer Interferometry (PSI) technique was used to estimate the subsidence rate, and the results were validated with the help of precise levelling data and correlated with geological observations. Today, fortunately, we have the nationwide Ground Motion Service (GMS) of Sweden (https://insar.rymdstyrelsen.se) covering almost the entire country, which provides an opportunity to compare and cross-check the results of this new service with previous studies, for example those reported for Uppsala and Gävle. The temporal coverage of the satellite data used for the GMS of Sweden overlaps with the data used in the previous studies of Uppsala and Gävle, and the same PSI technique has been used to generate the displacement map and time series.

In this study, we used the previous PSI results for Uppsala and Gävle to validate the newly launched nationwide GMS of Sweden. The Line Of Sight (LOS) displacement time series at selected deforming locations were compared for both sets of PSI results. Although the number and acquisition dates of the Sentinel-1 data and the parameters used for PSI processing are not exactly the same, the compared results show good agreement between the corresponding studies on the localization and rate of subsidence in those two cities over the last ~5 years. The validation phase of the new GMS of Sweden is in progress, and our study shows promising results, at least for urban areas in these two cities.

References

Fryksten J., Nilfouroushan F., Analysis of Clay-Induced Land Subsidence in Uppsala City Using Sentinel-1 SAR Data and Precise Leveling. Remote Sens. 2019, 11, 2764. https://doi.org/10.3390/rs11232764

Gido N.A.A., Bagherbandi M., Nilfouroushan F., Localized Subsidence Zones in Gävle City Detected by Sentinel-1 PSI and Leveling Data. Remote Sens. 2020, 12, 2629. https://doi.org/10.3390/rs12162629

How to cite: Nilfouroushan, F., Gido, N. A. A., and Darvishi, M.: Cross-checking of the nationwide Ground Motion Service (GMS) of Sweden with the previous InSAR-based results: Case studies of Uppsala and Gävle Cities, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5293, https://doi.org/10.5194/egusphere-egu22-5293, 2022.

EGU22-5719 | Presentations | NH6.5

Ground deformation time series prediction based on machine learning 

Carolina Guardiola-Albert, Héctor Aguilera, Juliana Arias Patiño, Javier Fullea Urchulutegui, Pablo Ezquerro, and Guadalupe Bru

Predicting terrain deformation time series from radar interferometry (InSAR) data is one of the biggest current challenges for the prevention and mitigation of the impact of geological risks (e.g. earthquakes, volcanoes, subsidence, landslides) that affect both urban (e.g. building movement) and non-urban areas. Generating spatio-temporal alert systems for terrain deformation processes based on predictive models is a key step towards the prevention and management of geological risks. Within machine learning, deep learning offers the possibility of applying prediction models for deformation time series on images using convolutional neural networks (Ma et al., 2020).

The objective of the present study is to develop a methodology for building predictive models of terrain deformation time series from InSAR images using machine learning algorithms (e.g. deep convolutional neural networks). The algorithms will be trained on terrain deformation time series contained in InSAR images processed by the Geological Survey of Spain (IGME-CSIC). Different machine learning architectures and parameterizations will be tested.
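A common preparation step for such predictive models, whatever the architecture, is slicing each deformation time series into input windows and prediction targets. A minimal sketch (the window length and the synthetic series are illustrative assumptions, not the project's data):

```python
import numpy as np

def make_windows(series, n_in=12, n_out=1):
    """Slice a deformation time series into (input window, target) pairs
    for supervised learning, as commonly done before training a
    convolutional or recurrent predictor."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    return np.asarray(X), np.asarray(y)

# Synthetic PS displacement series: linear subsidence plus a seasonal term
t = np.arange(100)
disp = -0.5 * t + 3.0 * np.sin(2 * np.pi * t / 30.0)
X, y = make_windows(disp)   # X: (88, 12) inputs, y: (88, 1) targets
```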

This work is performed within the framework of the SARAI Project PID2020-116540RB-C22 funded by MCIN/ AEI /10.13039/501100011033.

Reference:

Ma, P., Zhang, F., and Lin, H. (2020). Prediction of InSAR time-series deformation using deep convolutional neural networks. Remote Sensing Letters, 11(2), 137–145.


How to cite: Guardiola-Albert, C., Aguilera, H., Arias Patiño, J., Fullea Urchulutegui, J., Ezquerro, P., and Bru, G.: Ground deformation time series prediction based on machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5719, https://doi.org/10.5194/egusphere-egu22-5719, 2022.

EGU22-6040 | Presentations | NH6.5 | Highlight

A first appraisal of the European Ground Motion Service 

Lorenzo Solari, Michele Crosetto, Joanna Balasis-Levinsen, Luke Bateson, Nicola Casagli, Valerio Comerci, Luca Guerrieri, Michaela Frei, Marek Mróz, Dag Anders Moldestad, Anneleen Oyen, and Henrik Steen Andersen

Satellite interferometry (InSAR) is a reliable and proven technique for monitoring and mapping geohazards over wide areas. In recent years, InSAR has increasingly become an everyday tool for geoscientific and applied analyses; many different users, from academia to industry, work with and rely on InSAR products.

The European Ground Motion Service (EGMS) was conceived and is being implemented as a direct response to growing user needs. The EGMS is implemented under the responsibility of the European Environment Agency within the Copernicus Programme, and its products are part of the portfolio of the Copernicus Land Monitoring Service. The EGMS provides consistent, regular, standardized, harmonized, and reliable information on natural and anthropogenic ground motion phenomena over the Copernicus Participating States and across national borders, with millimeter accuracy. The EGMS distributes three levels of products: (i) basic, i.e. line-of-sight (LOS) velocity maps in ascending and descending orbits referred to a local reference point; (ii) calibrated, i.e. LOS velocity maps calibrated with a geodetic reference network (a velocity model derived from thousands of global navigation satellite system time series is used for calibration, so that measurements are no longer relative to a local reference point); and (iii) ortho, i.e. horizontal and vertical components of motion anchored to the reference geodetic network. The products are generated from the multi-temporal interferometric analysis of Sentinel-1 images in ascending and descending orbits at full resolution. The data are available to all, free of charge, through a dedicated viewer and download interface.
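The calibration in product level (ii) ties relative InSAR velocities to a GNSS-derived velocity model. As an illustrative simplification (not the EGMS production algorithm), this can be sketched as fitting the planar difference between the two datasets and adding it back to the relative measurements:

```python
import numpy as np

def calibrate(x, y, v_insar, v_gnss_model):
    """Fit a plane to the difference between a GNSS velocity model and
    relative InSAR LOS velocities, then add it back, so the calibrated
    velocities are no longer tied to a local reference point.
    Illustrative simplification of the calibration concept."""
    diff = v_gnss_model - v_insar
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, diff, rcond=None)
    return v_insar + A @ coeffs

# Relative velocities offset by -3 mm/yr plus a small ramp vs. the model
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 50, 200), rng.uniform(0, 50, 200)   # km coordinates
v_model = -0.1 * x + 0.05 * y                              # GNSS-like model
v_insar = v_model - 3.0 + 0.02 * x                         # relative, ramped
v_cal = calibrate(x, y, v_insar, v_model)                  # recovers v_model
```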

Access to accurate and validated EGMS interferometric data offers the geoscientific and professional communities the opportunity to study geohazards at the European level, including difficult-to-reach areas or areas where ground motion data have so far been scarce or absent. The EGMS provides, for example, information useful for identifying and monitoring slow-moving landslides, natural subsidence, subsidence due to groundwater exploitation or underground mining activities, and volcanic unrest. In addition, the Service establishes a baseline for studies dedicated to localized deformation affecting buildings and infrastructure in general. This presentation will offer a first evaluation of the EGMS products from a geoscientific perspective. Case studies from different European environmental contexts will be shown to demonstrate how the EGMS products can be successfully used for geohazard-related studies.

How to cite: Solari, L., Crosetto, M., Balasis-Levinsen, J., Bateson, L., Casagli, N., Comerci, V., Guerrieri, L., Frei, M., Mróz, M., Moldestad, D. A., Oyen, A., and Andersen, H. S.: A first appraisal of the European Ground Motion Service, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6040, https://doi.org/10.5194/egusphere-egu22-6040, 2022.

EGU22-6544 | Presentations | NH6.5 | Highlight

Hazard assessment with SAR – What to expect from the NISAR mission 

Cathleen Jones, Karen An, and Scott Staniewicz

NASA’s NISAR mission, expected to launch in early 2023, will provide SAR observations of nearly all Earth’s land surfaces and selected ocean and sea ice areas on both ascending and descending orbits at a 12-day orbit repeat interval.  In this talk, mission plans to support both sustained and event-driven observations for hazard assessment are presented.  The NISAR satellite will carry both L- and S-band instruments, with the L-band instrument providing the near-global coverage and the S-band acquisitions concentrated in southern Asia and the polar regions.  In addition, the mission system will be capable of accepting and implementing requests for rapid processing to support disaster response.  Most land observations are part of the standard observation plan, so requested scenes will be marked for rapid processing and delivery, with the goal of providing information within hours of acquisition.  In the event that new acquisitions are needed, e.g., over the ocean as major tropical storms develop, the instrument can be retasked to acquire new scenes.

In addition, we present information about the mission's efforts to enable realistic simulation of NISAR's capabilities across a broad range of science and applications topics. To that end, L-band quad-polarimetric and repeat-pass SAR data acquired with the airborne UAVSAR instrument, which has ~3 m single-look resolution, have been processed to be 'NISAR-like', with the noise level and spatial resolution of NISAR's planned acquisition modes. To date, more than 400 NISAR-like products from 70 different UAVSAR scenes acquired in North America and Greenland have been produced, and the UAVSAR project is continuing to generate more products specifically to support hazard assessment for fires and landslides. Examples of anticipated NISAR performance will be shown in comparison to results using the full-resolution UAVSAR products.

This work was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.

How to cite: Jones, C., An, K., and Staniewicz, S.: Hazard assessment with SAR – What to expect from the NISAR mission, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6544, https://doi.org/10.5194/egusphere-egu22-6544, 2022.

Slow-moving landslides are hydrologically driven and respond to changes in precipitation over daily to decadal timescales. Open-access satellite InSAR data products, which are becoming increasingly common, can be used to investigate landslides (and other ground surface deformation) over large regions. Here we use standardized open-access satellite radar interferometry data processed by the Advanced Rapid Imaging and Analysis (ARIA) team at NASA’s Jet Propulsion Laboratory to identify 247 active landslides in California, USA. These landslides occur in both wet and dry climates and span a range of more than ~2 m/yr in mean annual rainfall. We quantify the sensitivity of 38 landslides to changes in rainfall, including a drought and extreme rainfall that occurred in California between 2015 and 2020. Despite the large differences in climate, we found these landslides exhibited surprisingly similar behaviors and hydrologic sensitivity, characterized by faster (slower) than normal velocities during wetter (drier) than normal years. Our study documents the first application of open-access standardized InSAR products from ARIA to identify and monitor landslides across large regions. Given the large volume of open-access InSAR data that is currently available, and will continue to increase with time, especially with the upcoming launch of the NASA-ISRO SAR (NISAR) satellite, standardized InSAR products will become one of the primary ways to deliver InSAR data to the broader scientific community. It is therefore important to continue to explore new approaches for analyzing these InSAR products for scientific research.

How to cite: Handwerger, A., Fielding, E., Sangha, S., and Bekaert, D.: Tracking slow-moving landslides over large regions using open-access standardized InSAR products produced by the Advanced Rapid Imaging and Analysis (ARIA) Center for Natural Hazards project, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6817, https://doi.org/10.5194/egusphere-egu22-6817, 2022.

This paper presents the results of long-term terrain subsidence monitoring in the mining area of the Upper Silesian Coal Basin (USCB) in Poland using Interferometric Synthetic Aperture Radar (InSAR), supplemented with differential analysis of digital elevation models. The work included analysis of mining-induced subsidence based on three archival surface models: a historical terrain model obtained from the digitization of Messtischblatt topographic maps, representing the surface in 1919–1944; the DTED numerical terrain model, derived from the vectorization of diapositives of topographic maps from the 1990s; and a LIDAR digital terrain model from 2013. The archival analyses were complemented by the newest PSInSAR database of Sentinel-1 data, processed for the entire USCB area. The data covered a period of 6 years (October 26, 2014 – June 26, 2020), in which a total of 260 scenes from 124 descending paths were used. In the time domain, data were recorded at intervals of 12 days (for one Sentinel-1 satellite) or every 6 days for the full Sentinel-1 A/B constellation. The entire collection includes 8,139,901 PS points over 6,620 sq km, giving an average density of about 1,230 PS/sq km. The dataset enabled the analysis of contemporary vertical land movements. This large set of diverse data was used to analyze the long-term influence of mining in the area, broken down into time intervals collectively covering the period from the mid-twentieth century to 2020. As a result of the analyses, zones of mining-induced subsidence were delineated where the terrain surface changed systematically over the years. The data allowed identification of over 600 sq km affected by mining exploitation. Subsidence areas were matched with topographic data, such as buildings and roads, to estimate the effect of subsidence on urban areas.
The work shows the great advantage of remote monitoring methods, namely the ability to reveal long-term environmental impacts over large areas. The use of both historical and recent data allowed a comprehensive analysis of surface changes, both past and present.

How to cite: Przyłucka, M., Perski, Z., and Kowalski, Z.: Long-term analysis of the environmental impact of mining in the Upper Silesia Coal Basin area based on historical and the latest remote sensing data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7159, https://doi.org/10.5194/egusphere-egu22-7159, 2022.

EGU22-8397 | Presentations | NH6.5

P-band SAR for deformation surveying: advantages and challenges 

Yuankun Xu, Zhong Lu, and Jin-Woo Kim

To date, mainstream SAR (Synthetic Aperture Radar) systems operate predominantly in the X/C/L bands (wavelengths of 3.1–24.2 cm), which commonly experience low coherence and thereby degraded InSAR accuracy over densely vegetated terrain. The long-wavelength (69.7 cm) P-band SAR, in contrast, holds the potential to address this challenge by penetrating through dense forests to collect highly coherent data takes. Here, we experimented with the NASA JPL (Jet Propulsion Laboratory) P-band AirMOSS (Airborne Microwave Observatory of Subcanopy and Subsurface) radar system, acquiring repeat-pass SAR data over diverse terrains (14 flight segments) in Washington, Oregon, and California (USA), and comprehensively evaluated the performance of P-band InSAR for ground deformation surveying. Our results show that AirMOSS P-band InSAR retained coherence twice as high as L-band ALOS-2 (Advanced Land Observing Satellite-2) data, and was significantly more effective in discovering localized geohazards that were invisible in ALOS-2 interferograms over forested areas. Additionally, P-band InSAR can better avoid phase aliasing and thus resolve high-gradient deformation. Despite these advantages, P-band InSAR was less sensitive to subtle deformation than X/C/L-band radars and faced similar challenges posed by waterbodies, thick snow cover, shadow and layover effects, and the side-looking configuration. Overall, our results suggest that P-band InSAR could be a revolutionary tool for measuring relatively high-gradient deformation under dense forest canopies.
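The phase-aliasing advantage of longer wavelengths can be illustrated with a common rule of thumb: the differential LOS displacement between adjacent pixels should stay below a quarter wavelength (phase difference below π) to remain unambiguous. The C-band wavelength below is an assumption (Sentinel-1 class), and real limits also depend on pixel spacing, coherence and filtering:

```python
# Rule-of-thumb maximum unambiguous deformation gradient per pixel:
# LOS displacement difference between neighbouring pixels < lambda / 4,
# so longer wavelengths tolerate steeper deformation before aliasing.
wavelengths_cm = {"X": 3.1, "C": 5.6, "L": 24.2, "P": 69.7}

def max_gradient_cm_per_pixel(wavelength_cm):
    return wavelength_cm / 4.0

for band, lam in wavelengths_cm.items():
    print(f"{band}-band: {max_gradient_cm_per_pixel(lam):.2f} cm per pixel")
```

By this measure the P-band limit (~17.4 cm per pixel) is roughly an order of magnitude larger than at X-band.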

How to cite: Xu, Y., Lu, Z., and Kim, J.-W.: P-band SAR for deformation surveying: advantages and challenges, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8397, https://doi.org/10.5194/egusphere-egu22-8397, 2022.

EGU22-9194 | Presentations | NH6.5

The importance of InSAR data post-processing for the interpretation of geomorphological processes 

Marta Zocchi, Benedetta Antonielli, Roberta Marini, Claudia Masciulli, Gianmarco Pantozzi, Francesco Troiani, Paolo Mazzanti, and Gabriele Scarascia Mugnozza

A-DInSAR (Advanced Differential Synthetic Aperture Radar Interferometry) is widely acknowledged as one of the most powerful remote sensing tools for measuring Earth’s surface displacements over large areas, and landslides in particular. Persistent Scatterer Interferometry (PS-InSAR or PSI) is a common multitemporal A-DInSAR technique, which allows displacement measurements to be retrieved with sub-centimetric precision. The characterization and interpretation of landslides can greatly benefit from A-DInSAR post-processing tools, especially when extremely slow-moving phenomena are not detectable by classical geomorphological investigations, or when complex displacement patterns need to be highlighted. Detailed representations of the spatial and temporal evolution of the processes provide useful constraints during the planning stages of reconstruction and for land-use purposes.
The present study is part of a broader national project focused on updating and monitoring landslide-prone slopes interacting with urban centres in the Central Apennines (Italy), using both geomorphological and A-DInSAR analyses. Although field surveys permitted the systematic updating of the available landslide inventories, in most cases clear indications of displacement emerged only from the SAR interferometry results. In this regard, the preliminary results of the ongoing research focus on specific post-processing analyses of interferometric data performed in the study area.
A dedicated PS-toolbox, developed by NHAZCA S.r.l. as a set of post-processing plugins for the open-source software QGIS, was specifically designed to highlight spatial and temporal deformation trends in the PSI results, as well as to visualize differences between multi-satellite datasets. Moreover, the PS-toolbox allowed subtle surface displacement patterns within the landslide area to be depicted, shedding light on the kinematics and style of activity of slope instabilities.
In complex morphological settings, such as the Apennine mountain regions, geometric distortions and limited site coverage can lead to a lack of information. We therefore compared the coverage of PSs and the accuracy of the surface velocity maps produced using different InSAR tool packages on both Sentinel-1 and COSMO-SkyMed scenes. The comparison of the resulting datasets allowed their validation in terms of measured displacements and reliability for further processing.

How to cite: Zocchi, M., Antonielli, B., Marini, R., Masciulli, C., Pantozzi, G., Troiani, F., Mazzanti, P., and Scarascia Mugnozza, G.: The importance of InSAR data post-processing for the interpretation of geomorphological processes, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9194, https://doi.org/10.5194/egusphere-egu22-9194, 2022.

EGU22-9443 | Presentations | NH6.5

Regional scale monitoring results of surface deformation in the Transcarpathian Region 

Balint Magyar and Roland Horvath

One of the main objectives of the GeoSES* project is to investigate dangerous natural and anthropogenic geo-processes and perform hazard assessment using space geodetic technologies, concentrating on the Hungary–Slovakia–Romania–Ukraine cross-border region. The monitoring of natural hazards and emergency situations (e.g. landslides and sinkholes) is an additional objective of the project. Within the framework of the project, our study utilizes one of the fastest-developing space-borne remote sensing technologies, namely InSAR, which is an outstanding tool for large-scale ground deformation observation and monitoring. Accordingly, we utilized ascending and descending Sentinel-1 Level-1 SLC acquisitions from 2014 to 2021 over the indicated cross-border area, focusing on the Transcarpathian Region.

We also present an automated processing chain for Sentinel-1 interferometric wide swath acquisitions to generate long-term ground deformation time series. The pre-processing part of the workflow includes the retrieval of the input data from the Alaska Satellite Facility (ASF), the integration of precise orbits from S1QC, the corresponding radiometric calibration and mosaicking of the TOPS-mode data, and the geocoding of the geometrical reference. Subsequently, all slave acquisitions are co-registered to the geometrical reference using iterative intensity matching and spectral diversity methods, followed by deramping. To retrieve deformation time series from the co-registered SLC stacks, we implemented multi-reference Interferometric Point Target Analysis (IPTA) using single-look and multi-look phases in the GAMMA software. After forming differential interferometric point stacks, we conducted the iterative IPTA processing, in which the topographic and orbit-related phase components, as well as the atmospheric phase, the height-dependent atmospheric phase and a linear phase term supplemented with the deformation phase, are modelled and refined through iterative steps. To retrieve recent deformation of the investigated area, SVD-based least-squares optimization was used to transform the multi-reference stack into single-reference phase time series, which were then converted to LOS displacements within the processing chain. Combining the ascending and descending LOS solutions also supports the estimation of the quasi east–west and up–down components of the surface deformation. Results are interpreted both at regional scale and through local examples from the cross-border region.
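The quasi east–west and up–down decomposition from ascending and descending LOS solutions can be sketched as a per-pixel 2×2 linear solve. The incidence angles and sign convention below (both orbits right-looking, ascending geometry sensitive to eastward motion with opposite sign to descending, north motion neglected) are illustrative assumptions of this sketch:

```python
import numpy as np

def decompose(los_asc, los_desc, inc_asc_deg=39.0, inc_desc_deg=41.0):
    """Solve the 2x2 system mapping (east, up) motion to ascending and
    descending LOS observations. North motion is neglected and the sign
    convention is an assumption of this sketch."""
    ta, td = np.deg2rad(inc_asc_deg), np.deg2rad(inc_desc_deg)
    G = np.array([[-np.sin(ta), np.cos(ta)],    # ascending row
                  [ np.sin(td), np.cos(td)]])   # descending row
    east, up = np.linalg.solve(G, np.array([los_asc, los_desc]))
    return east, up

# Forward-model a known motion (east = 3, up = -10 mm/yr), then recover it.
ta, td = np.deg2rad(39.0), np.deg2rad(41.0)
los_asc = -np.sin(ta) * 3.0 + np.cos(ta) * (-10.0)
los_desc = np.sin(td) * 3.0 + np.cos(td) * (-10.0)
east, up = decompose(los_asc, los_desc)
```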

* Hungary-Slovakia-Romania-Ukraine (HU-SK-RO-UA) ENI Cross-border Cooperation Programme (2014-2020) “GeoSES” - Extension of the operational "Space Emergency System"

How to cite: Magyar, B. and Horvath, R.: Regional scale monitoring results of surface deformation in the Transcarpathian Region, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9443, https://doi.org/10.5194/egusphere-egu22-9443, 2022.

EGU22-9733 | Presentations | NH6.5

EGMS: a New Copernicus Service for Ground Motion Mapping and Monitoring 

Mario Costantini, Federico Minati, Francesco Trillo, Alessandro Ferretti, Emanuele Passera, Alessio Rucci, John Dehls, Yngvar Larsen, Petar Marinkovic, Michael Eineder, Ramon Brcic, Robert Siegmund, Paul Kotzerke, Ambrus Kenyeres, Sergio Proietti, Lorenzo Solari, and Henrik Andersen

Satellite interferometric SAR (InSAR) has proven to be a powerful technology for performing millimeter-scale precision measurements of ground motion. The European Ground Motion Service (EGMS), funded by the European Commission as an essential element of the Copernicus Land Monitoring Service (CLMS), constitutes the first application of InSAR technology to high-resolution monitoring of ground deformation over an entire continent, based on full-resolution processing of all Sentinel-1 (S1) satellite acquisitions over most of Europe (the Copernicus Participating States).

Upscaling from existing national precursor services to the pan-European scale is challenging. The EGMS employs the most advanced persistent scatterer (PS) and distributed scatterer (DS) InSAR processing algorithms, and adequate techniques to ensure seamless harmonization between the Sentinel-1 tracks. Moreover, within the EGMS, a high-quality Global Navigation Satellite System (GNSS) model on a 50 km grid is realized in order to tie the InSAR products to the geodetic reference frame ETRF2014.

The millimeter-scale precision measurements of ground motions provided by EGMS will enable mapping and monitoring of landslides, subsidence and earthquake or volcanic phenomena all over Europe, and the stability of slopes, mining areas, buildings and infrastructures. The first release of EGMS products will be in March 2022, with annual updates to follow.

To foster the widest possible usage, the EGMS provides tools for visualization, exploration, analysis and download of the ground deformation products, as well as elements to promote best-practice applications and user uptake.

The new European geospatial dataset provided by EGMS will hopefully also stimulate the development of value-added products/services for the analysis and monitoring of ground motions and stability of structures based on InSAR measurements, as well as other InSAR products with higher spatial and/or temporal resolution.

This work will describe all the qualifying points of the EGMS. Particular attention will be paid to the characteristics and accuracy of the products, ensured in such a large-scale production effort by advanced algorithms and quality checks.

In addition, many examples of EGMS products will be shown to discuss the great potential and the (few) limitations of EGMS for mapping and monitoring landslides, subsidence and earthquake or volcanic phenomena, and the related stability of slopes, buildings and infrastructures.

How to cite: Costantini, M., Minati, F., Trillo, F., Ferretti, A., Passera, E., Rucci, A., Dehls, J., Larsen, Y., Marinkovic, P., Eineder, M., Brcic, R., Siegmund, R., Kotzerke, P., Kenyeres, A., Proietti, S., Solari, L., and Andersen, H.: EGMS: a New Copernicus Service for Ground Motion Mapping and Monitoring, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9733, https://doi.org/10.5194/egusphere-egu22-9733, 2022.

EGU22-10347 | Presentations | NH6.5

Monitoring mining-induced ground deformation in Karagandy mining basin using InSAR 

Gauhar Meldebekova, Chen Yu, Jon Mills, and Zhenhong Li

Strata deformation associated with underground longwall coal mining can induce large ground surface subsidence. The Karagandy basin, one of the largest coal mining regions in Kazakhstan, is located in close proximity to urban areas and critical infrastructure, necessitating detailed investigation of the spatial distribution and temporal dynamics of subsidence. Synthetic aperture radar interferometry (InSAR) is recognised as a powerful tool to detect, map and quantify ground deformation. In this research, C-band Sentinel-1 products were used for interferometric and time-series analysis using the Small BAseline Subset (SBAS) algorithm. Subsidence bowls were detected over eight mining sites. The maximum annual line-of-sight velocity, some −82 mm/year, was detected at the Kostenko mine, whilst cumulative subsidence reached a maximum of 350 mm over five years. Wavelet transform analysis was used to inspect the non-linear nature of the signal and confirmed the annual periodicity of the ground deformation. Spatio-temporal analysis of subsidence patterns revealed different drivers of deformation, with sites clustered accordingly. The results offer considerable insight for decision-making in future sustainable mining operations, both in Kazakhstan and further afield.
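The abstract confirms annual periodicity with wavelet analysis; a simpler spectral check on a synthetic series built from the reported rates illustrates the idea (a stand-in, not the authors' wavelet method — the seasonal amplitude is an assumption):

```python
import numpy as np

# Synthetic 5-year LOS series sampled every 12 days: -82 mm/yr linear
# subsidence plus an annual cycle, mimicking the behaviour described above.
dt_days = 12.0
t = np.arange(0, 5 * 365, dt_days)
signal = -82.0 / 365.0 * t + 5.0 * np.sin(2 * np.pi * t / 365.0)

# Detrend, then locate the dominant period in the residual spectrum.
residual = signal - np.polyval(np.polyfit(t, signal, 1), t)
freqs = np.fft.rfftfreq(len(t), d=dt_days)
power = np.abs(np.fft.rfft(residual)) ** 2
dominant_period_days = 1.0 / freqs[np.argmax(power[1:]) + 1]
```

The dominant residual period lands close to 365 days, which is the signature the wavelet analysis in the study resolves with full time localization.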

How to cite: Meldebekova, G., Yu, C., Mills, J., and Li, Z.: Monitoring mining-induced ground deformation in Karagandy mining basin using InSAR, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10347, https://doi.org/10.5194/egusphere-egu22-10347, 2022.

EGU22-11186 | Presentations | NH6.5

Values and challenges of DInSAR derived velocity estimates for landslide hazard assessment and mapping 

Mylene Jacquemart and Andrea Manconi

Deep-seated slope instabilities pose a significant hazard to infrastructure and livelihoods in mountain regions all around the world. Increasingly accessible data from synthetic aperture radar (SAR) satellites, such as ESA’s Copernicus Sentinel-1 mission, offer easier access to displacement data that can be used to detect, delineate, and monitor landslides in mountainous terrain. However, displacement measurements retrieved from differential interferometric processing (DInSAR) can be biased by the terrain geometry, which can lead to an underestimation of the true displacement. In addition, the quality of DInSAR results is highly susceptible to changes in surface geometry and moisture conditions, for example due to snow melt, hillslope erosion, or vegetation changes. Furthermore, the relative nature of DInSAR measurements can lead to underestimation of displacements due to phase aliasing. These factors may severely impact the accuracy of landslide velocity quantification. However, landslide velocities are often used directly in hazard assessment.

 

In Switzerland, mean and maximum landslide velocities are key factors used to assess the hazard intensity of unstable slopes, and thus to determine the slope hazard potential and, consequently, the hazard zonation. The latter has direct implications for land use and land-use planning. In this study, we use two exemplary large deep-seated instabilities at Brienzauls (canton of Grisons) and Spitzer Stein (canton of Bern), both in Switzerland, to showcase the challenges of relying on DInSAR-derived velocities for hazard mapping. We attempt to disentangle the effects of terrain and orbit geometry on the measurable velocities from those caused by transient changes to surface geometry and conditions, and explore ways by which the value of DInSAR-derived displacement measurements can nevertheless be maximized for hazard zonation mapping.

How to cite: Jacquemart, M. and Manconi, A.: Values and challenges of DInSAR derived velocity estimates for landslide hazard assessment and mapping, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11186, https://doi.org/10.5194/egusphere-egu22-11186, 2022.

EGU22-11733 | Presentations | NH6.5 | Highlight

Operational monitoring of our hazardous planet with Sentinel-1 

Tim Wright, Andy Hooper, Milan Lazecky, Yasser Maghsoudi, Karsten Spaans, and Tom Ingleby

The European Commission’s Sentinel-1 constellation, operated by ESA, has been a game changer for operational monitoring of our hazardous planet. When fully operational, the Sentinel-1 mission is a two-satellite constellation. Currently consisting of Sentinel-1A (launched in 2014) and Sentinel-1B (launched in 2016), the mission provides at least one SAR image of the whole land surface every 12 days, with both ascending and descending data acquired over tectonic/volcanic areas globally every 12 days, and images acquired in both geometries every 6 days over all of Europe. The narrow orbital tube, consistent imaging geometry, and long time series are optimised for ground deformation measurements with InSAR. Sentinel-1C and -1D have been built and will replace the existing satellites in due course. Perhaps the most important game changer has been the Copernicus data policy, which mandates fully free and open distribution of Sentinel-1 products for all applications, whether for research or commercial purposes. Sentinel-1 InSAR data have quickly become the primary data set for monitoring ground movement on our hazardous planet. Several research organisations and collaborations now process enormous quantities of Sentinel-1 data to produce deformation products that are made freely available through organisations such as COMET in the UK, EPOS and the new European Ground Motion Service in Europe, and the Alaska SAR Facility in the US. Commercial providers are processing data at scales ranging from individual bridges and dams through to whole countries. In this presentation we focus on Sentinel-1 results produced academically by COMET and commercially by SatSense Ltd. COMET now responds routinely to all continental earthquakes larger than M5.5 and provides interactive tools and machine-learning-based alerting for global volcanoes. COMET is also combining Sentinel-1 InSAR with GNSS to map tectonic strain at high spatial resolution on a continental scale, in areas including Anatolia, Tibet and Iran, and using the results to improve our understanding of seismic hazard. SatSense has demonstrated the value of Sentinel-1 InSAR for applications including dam monitoring, water pipe failures and railway infrastructure. The SatSense processing approach allows InSAR ground movement data to be kept continuously up to date for entire countries. We conclude the presentation by discussing prospects for the future of InSAR beyond Sentinel-1.

How to cite: Wright, T., Hooper, A., Lazecky, M., Maghsoudi, Y., Spaans, K., and Ingleby, T.: Operational monitoring of our hazardous planet with Sentinel-1, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11733, https://doi.org/10.5194/egusphere-egu22-11733, 2022.

EGU22-11969 | Presentations | NH6.5

Combined DInSAR-PSInSAR approach for increasing the quality of deformation map estimation in the area of underground mining exploitation 

Natalia Wielgocka, Kamila Pawluszek-Filipiak, and Maya Ilieva

Monitoring the deformation of the mining ground surface is crucial to ensuring the safety of residents and workers and the protection of all infrastructure in mining areas. The Polish realization of the European Plate Observing System project (EPOS-PL and its continuation EPOS-PL+) aims to build an infrastructure to monitor the deformation of the ground surface caused by extensive underground mining activities in the area of the Upper Silesian Coal Basin in Southern Poland. Among many geodetic and geophysical approaches to monitoring, two different Interferometric Synthetic Aperture Radar (InSAR) techniques have been applied, also taking advantage of the large set of freely available Sentinel-1 satellite data with a short (6-day) revisit time. In the current study the Differential InSAR (DInSAR) and Persistent Scatterer Interferometry (PSInSAR) approaches are compared, evaluated and integrated. Various processing strategies are tested with the aim of increasing the quality of the produced integrated deformation maps. The optimal processing strategy should accurately detect stable areas and estimate both the small deformations and the maximum deformation gradient occurring in the center of the subsidence bowl directly in the excavation area.

One of the main error contributors to Sentinel-1 data is water vapor in the atmosphere, which can slow the radar signal and modulate the results. The atmospheric artefacts therefore have to be minimized, since they are one of the main effects limiting the accuracy of interferometric products. In the PSInSAR approach, high-pass and low-pass filtering was used, while in the DInSAR approach the Atmospheric Phase Screen was estimated by fitting a polynomial surface to stable coherent points. Comparison of the one-year cumulated deformation for the area of the Rydułtowy mine in Poland with ground truth data, such as static GNSS measurements on reference points, shows that the PSInSAR results are more accurate. However, due to the linear deformation model required in PSInSAR processing, deformations in the center of the subsidence bowls could not be estimated. Therefore, the difference between the PSInSAR and DInSAR results was used for the refinement of the DInSAR deformation map. This refinement was based on various statistical approaches (e.g. polynomial interpolation, kriging, inverse distance weighting (IDW)). IDW and kriging showed the best performance, minimizing the errors associated with the DInSAR approach, providing a more accurate deformation map over the mining area and capturing the maximum deformation gradient.
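The IDW-based refinement step can be sketched as follows: interpolate the PSInSAR-minus-DInSAR differences available at coherent points onto a target location, then add the interpolated correction to the DInSAR estimate there. All coordinates and values below are hypothetical illustrations, not the study's data or its geostatistical software.

```python
def idw(points, values, xq, yq, power=2.0):
    """Inverse-distance-weighted estimate at (xq, yq) from scattered samples."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (x - xq) ** 2 + (y - yq) ** 2
        if d2 == 0.0:
            return v                       # query coincides with a sample
        w = 1.0 / d2 ** (power / 2.0)      # weight = 1 / distance^power
        num += w * v
        den += w
    return num / den

# hypothetical PSInSAR-minus-DInSAR differences (mm) at coherent points
pts  = [(0, 0), (10, 0), (0, 10), (10, 10)]
diff = [4.0, 6.0, 6.0, 8.0]
correction = idw(pts, diff, 5.0, 5.0)      # interpolate at the bowl centre
refined = -310.0 + correction              # hypothetical DInSAR value + correction
print(round(correction, 2), round(refined, 2))   # → 6.0 -304.0
```

Because the query point is equidistant from all four samples, the correction reduces to their mean; in general the weights favour the nearest coherent points.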

How to cite: Wielgocka, N., Pawluszek-Filipiak, K., and Ilieva, M.: Combined DInSAR-PSInSAR approach for increasing the quality of deformation map estimation in the area of underground mining exploitation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11969, https://doi.org/10.5194/egusphere-egu22-11969, 2022.

EGU22-12444 | Presentations | NH6.5

New advances of the P-SBAS algorithm for the efficient generation of full resolution DInSAR products through scalable HPC infrastructures 

Riccardo Lanari, Manuela Bonano, Sabatino Buonanno, Michele Manunta, Pasquale Striano, Muhammad Yasir, and Ivana Zinno

The widespread availability of large SAR data volumes systematically acquired during the last 3 decades by several space-borne sensors, operating with different spatial resolutions, footprint extensions, revisit times and bandwidths (typically X-, C-, or L-band), has promoted the development of advanced Differential Interferometric SAR (DInSAR) techniques providing displacement time series relevant to wide areas with rather limited costs. These techniques allow us to carry out detailed analyses of the Earth surface deformation effects caused by various natural and anthropic phenomena and also to investigate the displacements affecting man-made structures. In particular, with reference to the latter issue, the increasing need to assess, preserve and mitigate the health conditions of buildings and infrastructures, due to the high vulnerability of the built-up environment, has fostered over the last decades an intense exploitation of the advanced DInSAR techniques. In this context, a new frontier for the development of these methodologies is related to their effective exploitation in operational contexts, requiring the use of up-to-date interferometric processing techniques and advanced HPC infrastructures to precisely and efficiently generate value-added information from the available, multi-temporal large SAR data stacks.

Among several advanced DInSAR algorithms, a widely used approach is the Small BAseline Subset (SBAS) technique which has largely demonstrated its effectiveness to retrieve deformations relevant to natural and anthropic hazard scenarios, through the generation of spatially dense mean velocity maps and displacement time series with millimetric accuracy, at different spatial resolution scales (both regional and local ones). Moreover, a parallel algorithmic solution for the SBAS approach, referred to as the parallel Small BAseline Subset (P-SBAS) technique, has been recently developed.

In this work, we present some new advances of the full resolution P-SBAS DInSAR processing chain that allow us to effectively retrieve, in reasonable time frames (less than 24 hours), the spatial and temporal patterns of the deformation signals associated with the built-up heritage. This is achieved through a dedicated implementation of the full resolution P-SBAS processing chain that efficiently exploits HPC resources, also accessible through Cloud Computing environments. In particular, we make extensive use of innovative hardware and software parallel solutions based on GPUs, which are able to efficiently store, retrieve and process huge amounts of full resolution DInSAR products with high scalability performance.

To demonstrate the capability of the implemented solution, we show the results of the massive full resolution P-SBAS processing relevant to several urban areas of the Italian territory. This is done by exploiting the overall, full frame SAR image stacks of ascending and descending X-band SAR data acquired by the sensors of the Italian COSMO-SkyMed (CSK) constellation, operated in Stripmap mode (with about 3 m x 3 m spatial resolution), and those of the C-band Sentinel-1 twin sensors of the Copernicus Programme, exploiting the Interferometric Wide Swath TOPS mode (with about 15 m x 4 m spatial resolution). Moreover, we also benefit from the availability of the first data acquired by the second generation COSMO-SkyMed constellation (CSG), which allows continuity with the CSK data in the monitoring of the detected deformation phenomena.

How to cite: Lanari, R., Bonano, M., Buonanno, S., Manunta, M., Striano, P., Yasir, M., and Zinno, I.: New advances of the P-SBAS algorithm for the efficient generation of full resolution DInSAR products through scalable HPC infrastructures, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12444, https://doi.org/10.5194/egusphere-egu22-12444, 2022.

EGU22-12583 | Presentations | NH6.5

InSAR application for the detection of precursors on the Achoma landslide, Peru 

Benedetta Dini, Pascal Lacroix, and Marie-Pierre Doin

In the last few decades, InSAR has been used to identify ground deformation related to slope instability and to retrieve time series of landslide displacements. In some cases, retrospective retrieval of time series revealed acceleration patterns precursory to failure. This suggests that the higher temporal sampling of new-generation satellites may indeed offer the opportunity to detect motion precursory to failure with viable lead time.

However, the possibility to retrieve continuous time series over landslides is often impaired by factors such as unfavourable orientation or landcover and fast movements, which make phase unwrapping difficult if not, in certain cases, impossible.

One way to retrieve precursors of destabilisation for landslides that present characteristics unfavourable to unwrapping and to time series inversion is to analyse in detail changes in successive interferograms in the phase domain in combination with interferometric coherence.  

We generated and analysed 102 Sentinel-1 interferograms, covering the period between April 2015 and February 2021, at high spatial resolution (8 and 2 looks in range and azimuth, respectively) over the Achoma landslide in the Colca valley, Peru. This previously unidentified large, deep-seated landslide, covering an area of about 40 hectares, failed on 18 June 2020, damming the Rio Colca and giving origin to a lake.

We developed a method to analyse the changes through time of the unwrapped phase difference between a stable point and points within the landslide. In combination with this, we investigated patterns of coherence loss both within the landslide and in the surrounding area.

We observed that, in the weeks prior to the landslide, there was an increase in the phase difference between a stable reference and points within the landslide, indicating an acceleration of the downslope displacements. In addition, seasonal coherence loss is seen both within the landslide and in the surrounding area, coinciding with wet periods. However, we also observed significant local coherence loss outlining the scarp and the southeastern flank of the landslide, intermittently in the years before failure, in periods in which coherence was overall higher. Moreover, we observe a sharp decrease in the ratio between the coherence within the landslide and that of the surrounding area roughly six months before the failure.
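The coherence-ratio indicator described above can be sketched in a few lines: the mean interferometric coherence inside the landslide is divided by that of the surrounding stable area, and a drop in the ratio flags possible destabilisation. The coherence values and the 0.6 alert threshold below are invented for illustration and are not the study's numbers.

```python
def coherence_ratio(inside, outside):
    """Ratio of mean coherence inside the landslide to that of the
    surrounding stable area (both given as lists of pixel coherences)."""
    return (sum(inside) / len(inside)) / (sum(outside) / len(outside))

# toy epochs: the second mimics a sharp pre-failure coherence drop inside
epochs = [
    ([0.62, 0.58, 0.60], [0.66, 0.64, 0.65]),   # ratio near 1: stable
    ([0.30, 0.28, 0.25], [0.63, 0.61, 0.60]),   # inside drops, outside stays
]
ratios = [coherence_ratio(i, o) for i, o in epochs]
alerts = [r < 0.6 for r in ratios]              # illustrative alert threshold
print([round(r, 2) for r in ratios], alerts)    # → [0.92, 0.45] [False, True]
```

Normalising by the surrounding area separates a landslide-local coherence loss from seasonal loss that affects both areas alike, which is the point of using the ratio rather than the raw coherence.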

This type of approach is promising for the extraction of relevant information from interferometric data when the generation of accurate and continuous time series of displacements is hindered by the nature of the landcover or of the landslide studied, as in the case of the Achoma landslide. The combination of key parameters and their changes through time obtained with this methodology may prove necessary for the identification of precursors over a wider range of landslides than time series generation alone allows.

 

How to cite: Dini, B., Lacroix, P., and Doin, M.-P.: InSAR application for the detection of precursors on the Achoma landslide, Peru, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12583, https://doi.org/10.5194/egusphere-egu22-12583, 2022.

EGU22-12646 | Presentations | NH6.5

Assessment of Land Subsidence Hazard, Vulnerability and Risk: A case study for National Capital Region in India 

Shagun Garg, Mahdi Motagh, Indu Jayaluxmi, Vamshi Karanam, Sivasakthy Selvakumaran, and Andrea Marinoni

Risk assessment and zoning are very important to risk management, as they indicate how severe a hazard can be and who would be most affected; this is especially crucial for densely populated areas.

Delhi, the capital of India, is the fifth most populous city in the world, with a population density of nearly 30,000 people per square mile. Like other global megacities, Delhi is facing a looming water crisis due to urbanization and rapid population expansion. The increasing demand for water has translated into the extraction of larger quantities of groundwater in the region. One of the many consequences of groundwater over-extraction is land subsidence. Among the ways to monitor land subsidence, Interferometric Synthetic Aperture Radar (InSAR) is considered the most effective and widely used technique. We applied the InSAR technique to Sentinel-1 data acquired during 2014-2020 and identified several localized subsidence zones in the region. In addition, a risk assessment was performed using a hazard-and-vulnerability approach.

In this study, a land subsidence risk assessment index was proposed based on the Disaster Risk Index. The cumulative subsidence volume, the land subsidence velocity, the subsidence gradient, and the groundwater exploitation intensity were collected, analyzed, and combined to create a land subsidence hazard evaluation map for the National Capital Region, India. Population density, land cover, and population estimates were adopted as indexes to create the vulnerability map. Finally, the land subsidence risk map was created by combining the hazard and vulnerability maps using a matrix multiplication approach. The final risk map was classified into three levels, i.e., high, medium, and low. The analysis highlights an area of approximately 100 square kilometers subjected to the highest risk level of land subsidence, demanding urgent attention. The findings of this study are highly relevant for government agencies to formulate new policies against the over-exploitation of groundwater and to facilitate a sustainable and resilient groundwater management system in Delhi NCR.
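The hazard-vulnerability combination can be sketched as an element-wise product of classified levels binned into three risk classes. The 1-3 level scale and the bin thresholds below are illustrative assumptions, not the classification actually used in the study.

```python
def risk_level(hazard, vulnerability):
    """Combine classified hazard and vulnerability (each 1 = low .. 3 = high)
    by element-wise product, then bin the product into three risk classes.
    Thresholds are illustrative, not the study's."""
    product = hazard * vulnerability       # ranges from 1 to 9
    if product >= 6:
        return "high"
    if product >= 3:
        return "medium"
    return "low"

# e.g. a densely populated cell over a fast-subsiding zone vs. quieter cells
print(risk_level(3, 3), risk_level(2, 2), risk_level(1, 2))   # → high medium low
```

Applied cell by cell over co-registered hazard and vulnerability rasters, this yields the three-level risk map described in the abstract.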

How to cite: Garg, S., Motagh, M., Jayaluxmi, I., Karanam, V., Selvakumaran, S., and Marinoni, A.: Assessment of Land Subsidence Hazard, Vulnerability and Risk: A case study for National Capital Region in India, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12646, https://doi.org/10.5194/egusphere-egu22-12646, 2022.

EGU22-12898 | Presentations | NH6.5

Subsidence in Askja Caldera between 2015 to 2021 

Josefa Sepúlveda, Andy Hooper, Susanna Ebmeier, and Camila Novoa

Iceland sits on a mid-ocean ridge, where the North American Plate is moving away from the Eurasian Plate at a relative rate of 18-19 mm/yr. The boundary between the plates is marked by active neovolcanism expressed in different volcanic centres and fissure swarms. Askja volcano is located in the North Volcanic Zone of Iceland; it covers an area of 45 km² and hosts three calderas. Three main eruptive episodes have been observed: i) 1874 to 1876, ii) 1921-1929, and iii) 1961. Monitoring data have shown a period of alternating subsidence and uplift between 1966 and 1972. Since at least 1983 the caldera has been subsiding at a rate of 5 cm/yr, although this rate has been slowly decaying with time. Additionally, tomography data have revealed a possible deeper zone (between 9 and 15 km depth) below the volcano where melt is stored, and the seismicity between 20 and 25 km depth may be interpreted as magma movement in this area. However, questions remain about what is producing the subsidence at Askja. In this work, we present Interferometric Synthetic Aperture Radar (InSAR) results for the period 2015 to 2021 at Askja. These data will help to constrain what is causing the subsidence at Askja Caldera.

How to cite: Sepúlveda, J., Hooper, A., Ebmeier, S., and Novoa, C.: Subsidence in Askja Caldera between 2015 to 2021, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12898, https://doi.org/10.5194/egusphere-egu22-12898, 2022.

EGU22-881 | Presentations | NH6.3

Performance testing of optical flow time series analyses based on a fast, high–alpine landslide 

Doris Hermle, Michele Gaeta, Michael Krautblatter, Paolo Mazzanti, and Markus Keuschnig

Accurate remote analyses of high-alpine landslides are a key requirement for future alpine safety. In critical stages of alpine landslides, UAV (unmanned aerial vehicle) data can be employed, using image registration techniques to derive ground motion with high temporal and spatial resolution. Nevertheless, classical area-based algorithms are restricted by dynamic surface alterations and a limited detectable velocity range, which results in noise from decorrelation and prevents their application to fast and complex landslides.

Here, for the first time to our knowledge, we apply optical flow time series to analyse one of the fastest and most critical debris flow source zones in Austria. The benchmark site Sattelkar (2,130-2,730 m asl), a steep, high-alpine cirque in Austria, is highly sensitive to rainfall and melt-water events, which led to a 70,000 m³ debris slide event in July 2014. We use a UAV data set (0.16 m) collected over three years (five acquisitions, 2018-2020). Our novel approach is to employ optical flow, which, along with phase correlation, is incorporated into the software IRIS. To test the performance, we compared the two algorithms by applying them to image stacks to calculate time-series displacement curves and ground motion maps. These maps enable us to precisely identify compartments of the complex landslide body and reveal different displacement patterns, with displacement curves reflecting an increased acceleration. Traceable boulders in the UAV orthophotos independently validate the applied methodology. We demonstrate that UAV optical flow time series analysis provides better signal extraction and a wider observable velocity range, highlighting how it can be applied to a fast, high-alpine landslide.
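As a minimal illustration of the area-based image matching that underlies such displacement mapping (not the IRIS optical flow or phase correlation implementation itself), the sketch below recovers the integer-pixel shift of a small feature between two toy acquisitions by exhaustive normalized cross-correlation.

```python
def best_shift(ref, cur, max_shift=2):
    """Integer-pixel displacement of `cur` relative to `ref`, found by
    exhaustive normalized cross-correlation over candidate shifts."""
    h, w = len(ref), len(ref[0])

    def ncc(dy, dx):
        a, b = [], []
        for y in range(h):
            for x in range(w):
                y2, x2 = y + dy, x + dx
                if 0 <= y2 < h and 0 <= x2 < w:       # overlapping pixels only
                    a.append(ref[y][x]); b.append(cur[y2][x2])
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((p - ma) * (q - mb) for p, q in zip(a, b))
        da = sum((p - ma) ** 2 for p in a) ** 0.5
        db = sum((q - mb) ** 2 for q in b) ** 0.5
        return num / (da * db) if da and db else -1.0  # guard flat patches

    shifts = [(dy, dx) for dy in range(-max_shift, max_shift + 1)
                       for dx in range(-max_shift, max_shift + 1)]
    return max(shifts, key=lambda s: ncc(*s))

# toy image: a bright "boulder" moves one pixel right between acquisitions
ref = [[0,0,0,0,0],[0,9,8,0,0],[0,7,9,0,0],[0,0,0,0,0],[0,0,0,0,0]]
cur = [[0,0,0,0,0],[0,0,9,8,0],[0,0,7,9,0],[0,0,0,0,0],[0,0,0,0,0]]
print(best_shift(ref, cur))   # → (0, 1)
```

Repeating this per patch and per acquisition pair yields displacement maps and, stacked through time, displacement curves; real optical flow additionally estimates sub-pixel, per-pixel motion.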

How to cite: Hermle, D., Gaeta, M., Krautblatter, M., Mazzanti, P., and Keuschnig, M.: Performance testing of optical flow time series analyses based on a fast, high–alpine landslide, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-881, https://doi.org/10.5194/egusphere-egu22-881, 2022.

EGU22-1013 | Presentations | NH6.3

Exploring knowledge-based and data-driven approaches to map earthflow and gully erosion features in New Zealand 

Daniel Hölbling, Lorena Abad, Raphael Spiekermann, Hugh Smith, Andrew Neverman, and Harley Betts

In New Zealand, earthflows and gullies are - next to shallow landslides - important erosion processes and sediment sources in hill country areas. They can cause damage to infrastructure, affect the productivity of farmland, and impact water quality due to fine sediment input to streams. Implementing effective erosion mitigation measures requires detailed information on the location, extent, and spatial distribution of these features over large areas. Remote sensing provides an excellent opportunity to gain such knowledge, whereby different approaches can be applied. In this study, we present two approaches for detecting earthflow and gully erosion features on the North Island of New Zealand.

Earthflows are complex mass movement features that can occur on gentle to moderate slopes in plastic, mixed, and disturbed earth with significant internal deformation, whereby vegetation cover usually remains on the earthflow bodies during movement. High-resolution aerial photography and a LiDAR digital elevation model (DEM), including a range of derived products such as slope, surface roughness, and terrain wetness index, were used within a knowledge-based object-based image analysis (OBIA) workflow to semi-automatically map potential earthflows. Specific earthflow characteristics discernible from the optical imagery, such as the presence of bare ground at the toe and rushes, were identified on different hierarchical segmentation levels and subsequently aggregated. Additionally, morphological and contextual properties (e.g. connection to streams) were integrated into the mapping workflow. Gully erosion is an indicator of land degradation, which occurs due to the removal of soil along drainage channels through surface water runoff. We tested a region-based convolutional neural network (Mask-RCNN) deep learning approach for object detection to map gully features. Deep learning was performed on three LiDAR DEM terrain derivatives, namely the slope length and steepness (LS) factor, hillshade, and terrain ruggedness index. Labelled chips for training data were generated with reference gully features mapped manually on historical aerial photography.

Semi-automated earthflow detection proved very challenging due to the complexity of these features and the lack of distinct characteristics to differentiate them from other features. The initial results suggest the knowledge-based OBIA workflow has potential, but a major challenge is the creation of objects that each represent a single earthflow. Hence, the current mapping results may better indicate terrain susceptible to potential earthflow occurrence rather than correctly detecting single earthflows. As for gully mapping, the data-driven deep learning framework shows promising results regarding gully presence and absence. Validation resulted in detected gullies overlapping 60% of the reference gully area. However, a limiting factor is that the available reference data were mapped on historical aerial photography and do not align with the LiDAR DEM. Given the significant impact of earthflows and gullies, it is essential to develop reliable and targeted analysis methods to better understand their spatial occurrence and enable improved representation of these erosion processes in catchment sediment budget models.
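The validation metric reported above (detected gullies overlapping 60% of the reference gully area) can be sketched as a simple mask-overlap fraction; the binary masks below are toy examples, not the study's data.

```python
def overlap_fraction(detected, reference):
    """Fraction of reference-mask pixels that are also covered by the
    detected mask (both given as 2-D binary grids)."""
    hits = ref_total = 0
    for d_row, r_row in zip(detected, reference):
        for d, r in zip(d_row, r_row):
            if r:
                ref_total += 1
                if d:
                    hits += 1
    return hits / ref_total

# toy binary masks (1 = gully pixel)
reference = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
detected  = [[1, 0, 0], [1, 1, 1], [0, 0, 0]]
print(overlap_fraction(detected, reference))   # → 0.75
```

This measures recall of the reference area only; a fuller validation would also report false positives (detected pixels outside any reference gully).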

How to cite: Hölbling, D., Abad, L., Spiekermann, R., Smith, H., Neverman, A., and Betts, H.: Exploring knowledge-based and data-driven approaches to map earthflow and gully erosion features in New Zealand, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1013, https://doi.org/10.5194/egusphere-egu22-1013, 2022.

EGU22-1139 | Presentations | NH6.3

Large deformation field from InSAR during 2015 to 2021 for the Makran subduction and North Tibet 

Xiaoran Lv, Falk Amelung, Yun Shao, and Xiaoyong Wu

We have calculated the deformation velocity field for the Makran subduction zone and the North Tibet region, with spatial ranges of [25°N-31°N; 55°E-67°E] and [30°N-41°N; 85°E-97°E], respectively. There are two significant deformation signals, at the epicentres of the 2013 Mw 7.7 Balochistan earthquake and the 2001 Mw 7.8 Kokoxili earthquake. For the Balochistan earthquake, we found that the 7-year post-seismic deformation was due to widespread aseismic slip along the megathrust and not to viscoelastic relaxation. For the Kokoxili earthquake, we probed whether the viscoelastic relaxation is still continuing. We first simulated the deformation caused by interseismic slip along the major active faults in Tibet. By comparing the simulated deformation with the observed deformation, we found that the maximum ratio of the simulated deformation to the observation is 42%, which means that the viscoelastic relaxation of the 2001 Kokoxili earthquake is still continuing. The effective viscosities of the lower crust and upper mantle are inverted as 1.78 × 10¹⁹ Pa s and 1.78 × 10²⁰ Pa s, respectively.

How to cite: Lv, X., Amelung, F., Shao, Y., and Wu, X.: Large deformation field from InSAR during 2015 to 2021 for the Makran subduction and North Tibet, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1139, https://doi.org/10.5194/egusphere-egu22-1139, 2022.

EGU22-1173 | Presentations | NH6.3

InSAR measurements of ground deformations at Ischia island (Naples, Italy) along two decades dataset 

Lisa Beccaro, Cristiano Tolomei, Claudia Spinetti, Marina Bisson, Laura Colini, Riccardo De Ritis, and Roberto Gianardi

Ground deformation in volcanic areas is mainly driven by the interaction between lithology, morphology, seismicity and volcanism. In recent decades, radar interferometry has contributed to understanding volcanic dynamics through the measurement of ground deformations. This work focuses on the displacement analysis at Ischia, an active volcanic island located at the north-western end of the Gulf of Naples and characterized by a long eruptive and seismic history. The central portion of the island is dominated by Mt. Epomeo, a volcano-tectonic horst formed by caldera resurgence, tilted southward and bordered by a system of faults and fractures which represent the preferred degassing pathway of the hydrothermal system beneath the island. Seismicity is mainly concentrated in the northern area, and the most recent and severe seismic sequence started with the Mw 3.9 earthquake on 21 August 2017, producing severe damage and casualties. In this study, the investigation of surface displacement was carried out over a continuous time interval of about 17 years by using Synthetic Aperture Radar (SAR) datasets with different temporal and spatial resolutions. The Small Baseline Subset interferometric technique was applied to the datasets, allowing the identification of the areas most prone to slope instability phenomena. The resulting ground displacement maps identified the highest deformations along the north-western, western and southern slopes of Mt. Epomeo and were validated using GPS data acquired by the local geodetic network. Mean velocity maps obtained from C-band Envisat and Sentinel-1 and X-band COSMO-SkyMed SAR data will be presented together with the ground deformation effects caused by the 2017 seismic swarm.

How to cite: Beccaro, L., Tolomei, C., Spinetti, C., Bisson, M., Colini, L., De Ritis, R., and Gianardi, R.: InSAR measurements of ground deformations at Ischia island (Naples, Italy) along two decades dataset, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1173, https://doi.org/10.5194/egusphere-egu22-1173, 2022.

EGU22-1329 | Presentations | NH6.3 | Highlight

Investigation of the Magnetospheric–Ionospheric–Lithospheric Coupling on occasion of the 14 August 2021 Haitian Earthquake 

Giulia D'Angelo, Mirko Piersanti, Roberto Battiston, Igor Bertello, Antonio Cicone, Piero Diego, Francesco Follega, Roberto Iuppa, Coralie Neubuser, Emanuele Papini, Alexandra Parmentier, Dario Recchiuti, and Pietro Ubertini

In the last few decades, the effort of the scientific community to address the issue of short-term earthquake forecasting has grown rapidly, thanks in part to the increasing number of data coming from networks of ground stations and satellites. This has led to the discovery of several atmospheric and ionospheric anomalies statistically related to seismic activity, such as ionospheric plasma density perturbations and/or atmospheric temperature and pressure changes. With the aim of contributing to the understanding of the physical mechanisms behind the coupling between the lithosphere, lower atmosphere, ionosphere and magnetosphere during an earthquake, this work presents a multi-instrumental analysis of a low-latitude seismic event (Mw = 7.2) that occurred in the Caribbean region on 14 August 2021. The earthquake happened during both very quiet solar and fair-weather conditions, making it an optimal case study for reconstructing the seismic scenario in terms of the link between lithosphere, atmosphere, ionosphere and magnetosphere. The proposed reconstruction, based on high-quality ground and satellite observations, suggests that the fault break generated an atmospheric gravity wave able to mechanically perturb the ionospheric plasma density, which, in turn, drove the generation of both electromagnetic waves and a variation of the magnetospheric field line resonance frequency. The comparison between the observations and the recent analytical Magnetospheric Ionospheric Lithospheric Coupling (M.I.L.C.) model confirms the activation of the lithosphere-atmosphere-ionosphere-magnetosphere chain.
In addition, the observations of the China Seismo-Electromagnetic Satellite (CSES-01), which was flying over the epicentre some hours before the earthquake, confirm both the presence of electromagnetic wave activity coming from the lower ionosphere and a plasma density variation consistent with the anomalous distribution of plasma density detected at ground level by a chain of Global Navigation Satellite System stations located around the epicentre.

How to cite: D'Angelo, G., Piersanti, M., Battiston, R., Bertello, I., Cicone, A., Diego, P., Follega, F., Iuppa, R., Neubuser, C., Papini, E., Parmentier, A., Recchiuti, D., and Ubertini, P.: Investigation of the Magnetospheric–Ionospheric–Lithospheric Coupling on occasion of the 14 August 2021 Haitian Earthquake, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1329, https://doi.org/10.5194/egusphere-egu22-1329, 2022.

EGU22-2444 | Presentations | NH6.3

Possible precursory indicators for the devastating fire in North Evia island during August 2021, using remotely sensed and Earth-observation data

A. Gemitzi and N. Koutsias

The present work aims at unveiling possible precursory signals of the devastating fire on North Evia island in August 2021, which destroyed approximately 400 km² of forest and cultivated land. To this end, the time series of two environmental parameters known to be related to wildfire occurrence, i.e. soil moisture and the Normalized Difference Vegetation Index (NDVI), were extracted and analyzed. Soil moisture in the top soil layer from 0 to 7 cm was extracted from the ERA5-Land Monthly Averaged ECMWF Climate Reanalysis data set at a spatial resolution of 9 km. The time series of remotely sensed NDVI was accessed through the Landsat 8 mission, at a spatial resolution of 30 m, with a 32-day time step. Both time series covered the period from January 2015 to October 2021. Results indicated two specific patterns in the examined time series. The soil moisture time series in the affected areas demonstrated a sharp declining trend since 2018, reaching its lowest value just prior to the fire events in North Evia. The NDVI time series did not show any distinctive trend during the examined period in the affected sites; however, comparing it to surrounding unaffected areas of the same extent and occupied by the same land cover types revealed an alarming finding: the NDVI time series in the affected sites demonstrated statistically significantly lower variability compared to the unaffected ones. This difference corresponds to more homogeneous vegetation and a possible absence of fire breaks in the burned areas compared to those that were not affected. Findings of the present work may help in highlighting areas with specific soil moisture and NDVI characteristics that indicate a high risk of fire occurrence.
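The variability comparison described above can be illustrated with a small sketch: two hypothetical 32-day NDVI series (all values synthetic, not the study's data) are compared with Levene's test, which flags a significant difference in variance even for non-normal data.

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(42)
# Hypothetical 32-day NDVI series (~80 observations over 2015-2021):
# the affected sites show lower variability than the unaffected ones.
ndvi_affected = 0.55 + 0.02 * rng.standard_normal(80)
ndvi_unaffected = 0.55 + 0.08 * rng.standard_normal(80)

# Levene's test checks whether the two series differ in variance
# (robust to non-normality, unlike a plain F-test).
stat, p_value = levene(ndvi_affected, ndvi_unaffected)
lower_variability = ndvi_affected.std(ddof=1) < ndvi_unaffected.std(ddof=1)
```

A significant p-value together with the smaller sample standard deviation of the affected sites reproduces the kind of "statistically significantly lower variability" finding reported above.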

How to cite: Gemitzi, A. and Koutsias, N.: Possible precursory indicators for the devastating fire in North Evia island during August 2021, using remotely sensed and Earth-observation data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2444, https://doi.org/10.5194/egusphere-egu22-2444, 2022.

EGU22-3853 | Presentations | NH6.3

Landslide susceptibility mapping of Chitral, northwestern Pakistan using GIS

Mukhtar S. Ahmad1, *, Mona Lisa1, Saad Khan2, and Munawar Shah3

1Department of Earth Sciences, Quaid-I-Azam University, Islamabad 45320, Pakistan

2Bacha Khan University Charsadda, Pakistan

3Department of Space Science, Institute of Space Technology, 44000 Islamabad, Pakistan

1mukhtargeo44@gmail.com

1lisa_qau@yahoo.com

2saadkhan@bkuc.edu.pk

3shahmunawar1@gmail.com

*Corresponding author: mukhtargeo44@gmail.com

Landslides are the most frequently occurring geohazard in rugged Himalayan mountainous terrains. They often cause significant loss of life and property, and therefore landslide susceptibility mapping (LSM) has become increasingly urgent and important. In this study, LSM is carried out in the Chitral district of the Hindukush region in northwestern Pakistan. Several Geographic Information System (GIS)-based models (such as the Analytical Hierarchy Process (AHP) and weighted overlay) have been used to build landslide inventory and susceptibility maps. The study incorporated nine main factors (including human-induced parameters, such as distance from roads; topographical parameters, such as slope, aspect, and landcover; geological parameters, such as lithology, distance to faults, and seismicity; and hydrological parameters, such as rainfall and distance to streams) to generate the LSM, which was further classified into five classes: very high, high, moderate, low, and very low susceptibility zones. It is concluded that most of the landslides in the study area are the result of steep mountain slopes, followed by precipitation and earthquakes. Landslides in the form of rockfalls are mostly due to the active seismicity of the Hindukush region. The predicted landslide-susceptible zones in the study area are in good agreement with past landslide localities, which supports the validity of the landslide susceptibility mapping in the region.
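As an illustration of the GIS weighted-overlay step, the sketch below combines hypothetical reclassified factor rasters with assumed AHP weights (all weights, scores and grid sizes are illustrative, not those derived in the study) and bins the resulting index into five susceptibility classes.

```python
import numpy as np

# Hypothetical AHP weights for the nine factors (must sum to 1) and
# reclassified factor rasters (scores 1-5); values are illustrative only.
weights = {"slope": 0.25, "rainfall": 0.15, "lithology": 0.12,
           "seismicity": 0.12, "dist_fault": 0.10, "dist_stream": 0.08,
           "dist_road": 0.08, "aspect": 0.05, "landcover": 0.05}
rng = np.random.default_rng(3)
rasters = {k: rng.integers(1, 6, size=(100, 100)) for k in weights}

# Weighted overlay: susceptibility index = sum_i w_i * factor_i.
lsi = sum(w * rasters[k] for k, w in weights.items())

# Classify into five susceptibility zones with equal-interval breaks:
# 0 = very low .. 4 = very high.
breaks = np.linspace(lsi.min(), lsi.max(), 6)
zones = np.digitize(lsi, breaks[1:-1])
```

The same pattern extends directly to real rasters read from a GIS, with the weights coming from an AHP pairwise comparison instead of being assumed.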

How to cite: Ahmad, S. M.: Landslide susceptibility mapping of Chitral, northwestern Pakistan using GIS, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3853, https://doi.org/10.5194/egusphere-egu22-3853, 2022.

EGU22-4082 | Presentations | NH6.3 | Highlight

Timing landslide and flash flood events from radar satellite 

Axel Deijns, Olivier Dewitte, Wim Thiery, Nicolas d'Oreye, Jean-Philippe Malet, and François Kervyn

Landslides and flash floods are geomorphic hazards (hereafter called GH) that often co-occur and interact. Such events generally occur very quickly, leading to catastrophic impacts. In this study we focus on the accurate estimation of the timing of GH events using satellite Synthetic Aperture Radar (SAR) remote sensing. More specifically, we focus on a tropical region, i.e. environments that are frequently cloud-covered and where accurate space-based characterization of the timing of GH events at a regional scale can only be achieved through the use of SAR, given its cloud-penetrating capabilities. In our multi-temporal change analysis method we investigated amplitude, spatial amplitude correlation and coherence time series of four recent large GH events, of several hundreds of occurrences each, covering various terrain conditions and containing combinations of landslides and flash floods within the western branch of the East African Rift located in tropical Africa. We identified changes within the SAR time series that could be attributed to the occurrence of the GH events and estimated GH event timing from them. We compared the SAR time series with vegetation and rainfall time series to better understand the environmental influence imposed by the varying terrain conditions. The Copernicus Sentinel-1 satellite is the key product used; besides being open access, it offers a dense, high-resolution time series within our study area. The results show that SAR can provide valuable information for GH event timing detection. The most accurate GH event timing estimations were achieved using the coherence time series, ranging from a one-day to a 1.5-month difference from the GH event occurrence, followed by the spatial amplitude correlation time series, with a one-day to a 2.5-month difference. Amplitude time series were highly influenced by seasonality and proved to be insufficient for accurate GH event timing estimation.
The results provide additional insight into the influence of seasonal vegetation and rainfall patterns for varying landscape conditions on the SAR time series. This research is one of the first to show the capabilities of SAR to constrain the timing of GH events with an accuracy much higher than what can be obtained from optical imagery in cloud-covered environments. These methodological results have the potential to be implemented in cloud-based computing platforms to help improve GH event detection tools at regional scales, and help to establish unprecedented GH event inventories in changing environments such as the East African Rift.
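The coherence-based timing estimate can be sketched as a simple change-point search: given a hypothetical 12-day coherence series (synthetic values, not the study's data), the event is bracketed by the first acquisition pair across which coherence drops below the pre-event baseline and stays low.

```python
import numpy as np

# Hypothetical 12-day-repeat coherence time series over an affected area:
# coherence drops sharply at the event and stays low afterwards.
dates = np.arange(0, 180, 12)  # days since start of series
coh = np.array([0.62, 0.60, 0.65, 0.58, 0.61, 0.20, 0.18, 0.22, 0.25, 0.21,
                0.24, 0.23, 0.26, 0.22, 0.25])

# Baseline from the pre-event acquisitions; threshold = mean - 3*std.
baseline = coh[:5]
thr = baseline.mean() - 3 * baseline.std(ddof=1)

# First acquisition whose coherence falls below the threshold: the event
# occurred within the preceding 12-day acquisition interval.
drop = int(np.argmax(coh < thr))
event_window = (int(dates[drop - 1]), int(dates[drop]))
```

This is only the core idea; the study additionally exploits amplitude and spatial amplitude correlation series and must handle seasonal coherence variations that a fixed baseline ignores.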

How to cite: Deijns, A., Dewitte, O., Thiery, W., d'Oreye, N., Malet, J.-P., and Kervyn, F.: Timing landslide and flash flood events from radar satellite, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4082, https://doi.org/10.5194/egusphere-egu22-4082, 2022.

EGU22-4849 | Presentations | NH6.3 | Highlight

Extending the integrated monitoring of deep-seated landslide activity into the past using free and open-source photogrammetry 

Johannes Branke, Thomas Zieher, Jan Pfeiffer, Magnus Bremer, Martin Rutzinger, Bernhard Gems, Margreth Keiler, and Barbara Schneider-Muntau

Deep-seated gravitational slope deformations (DSGSDs) pose serious threats to buildings and infrastructure in mountain regions. The understanding of past movement behavior is an essential requirement for enhancing process knowledge and potential mitigation measures. In this context, historical aerial imagery provides a unique possibility to assess and reconstruct the deformation history of DSGSDs. This study investigates the feasibility of using 3D point clouds derived from historical aerial imagery with free and open-source (FOSS) photogrammetric tools for analyzing the long-term behavior of the Reissenschuh DSGSD in the Schmirn valley (Tyrol, Austria) and assessing related secondary processes such as changes in creep velocity, rockfall or debris flows. For the photogrammetric analyses, scanned analogue and digital imagery of six acquisition flights, conducted in 1954, 1971/1973, 2007, 2010, and 2019, was processed using the FOSS photogrammetric suite MicMac. Further point cloud processing was carried out in CloudCompare. An improved version of the image correlation approach (IMCORR) implemented in SAGA GIS was used for the area-wide assessment of slope deformation. For georeferencing and scaling, an airborne laser scanning (ALS) point cloud of 2008 provided by the Federal State of Tyrol (Austria) was used. In total, five photogrammetric 3D point clouds covering the period from 1954 to 2019 were derived and analyzed in terms of displacement, velocity and acceleration. The accuracy assessment with computed Multiscale Model to Model Cloud Comparison (M3C2) distances between the photogrammetric 3D point clouds and the reference ALS 3D point cloud showed an overall uncertainty of about ±1.2 m (95% quantile) for all 3D point clouds produced with scanned analogue aerial images (1954, 1971/1973 and 2007), whereas the 3D point clouds produced with digital aerial imagery (2010, 2019) showed a distinctly lower uncertainty of about ±0.3 m (95% quantile).
In addition, digital elevation models (DEMs) of difference (DoDs) were calculated for each epoch. IMCORR and DoD results indicate significant displacements of up to 40 meters in 65 years for the central part of the landslide. The historical datasets further indicate a change in the spatio-temporal patterns of movement rates and a minor but overall acceleration of the landslide. The main challenges were (i) gaps in the 3D point clouds in areas of steep, shadowed slopes and high vegetation, (ii) ground filtering of the photogrammetric point clouds for accurate calculation of digital terrain models (DTMs) and (iii) the quality of the scanned aerial imagery, showing scratches, cuts, color distortions and linear artefacts. This research enabled the characterization of the spatio-temporal movement patterns of the Reissenschuh DSGSD over more than six decades. Further research will use the results as a reference for modelling the discussed multi-hazard processes.

This research was partly conducted within the project EMOD-SLAP funded by the Tyrolean Science Fund (TWF).

How to cite: Branke, J., Zieher, T., Pfeiffer, J., Bremer, M., Rutzinger, M., Gems, B., Keiler, M., and Schneider-Muntau, B.: Extending the integrated monitoring of deep-seated landslide activity into the past using free and open-source photogrammetry, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4849, https://doi.org/10.5194/egusphere-egu22-4849, 2022.

EGU22-5291 | Presentations | NH6.3

Electromagnetic anomalies detection over seismic regions during an earthquake 

Dario Recchiuti, Giulia D'angelo, Emanuele Papini, Piero Diego, Antonio Cicone, Alexandra Parmentier, Pietro Ubertini, Roberto Battiston, and Mirko Piersanti

The definition of the statistical distribution of ionospheric electromagnetic (EM) wave energy in the absence of seismic activity and other anomalous inputs (such as those driven by solar forcing) is a necessary step in order to determine a background for ionospheric EM emissions over seismic regions. An EM signal which differs from the background (exceeding a statistically meaningful threshold) should be considered a potential event to be investigated. In this work, by means of the FIF (Fast Iterative Filtering) data analysis technique, we performed a multiscale analysis of the ionospheric environmental background, using almost the entire CSES-01 (China Seismo-Electromagnetic Satellite) electric and magnetic field dataset (2019–2021), creating a map of the averaged relative energy (εrel) over 3° × 3° latitude–longitude cells, depending on both the spatial and temporal scale of the ionospheric medium.
In order to make a robust discrimination between external (atmospheric, ionospheric, magnetospheric, solar activities) and internal (earthquakes, volcanoes) sources generating anomalous signals, we took into account geomagnetic activity conditions in terms of the Sym-H index.
Here we present the results obtained for the August 14, 2021 Haitian earthquake (Mw 7.2) and the September 27, 2021 Crete (Greece) earthquake (Mw 6.0).
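The background-exceedance idea can be sketched with entirely synthetic numbers standing in for the εrel background maps: cells whose observed relative energy exceeds the quiet-time mean by a chosen multiple of the standard deviation are flagged, provided geomagnetic activity (here a simple Sym-H quietness flag) allows an internal source to be suspected.

```python
import numpy as np

# Hypothetical background map of averaged relative EM energy per
# 3° x 3° cell, built from multi-year quiet-time statistics.
rng = np.random.default_rng(0)
background_mean = 0.1 + 0.02 * rng.random((60, 120))  # lat x lon cells
background_std = 0.01 * np.ones((60, 120))

# A new pass yields per-cell energies; flag cells exceeding
# mean + 3 sigma as potential anomalies, only under quiet Sym-H.
observed = background_mean.copy()
observed[20, 40] += 0.1                # injected synthetic anomaly
sym_h_quiet = True                     # |Sym-H| below a chosen threshold
anomalies = sym_h_quiet & (observed > background_mean + 3 * background_std)
```

The 3-sigma threshold and the single quietness flag are simplifications; the study's discrimination between internal and external sources is considerably more involved.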

How to cite: Recchiuti, D., D'angelo, G., Papini, E., Diego, P., Cicone, A., Parmentier, A., Ubertini, P., Battiston, R., and Piersanti, M.: Electromagnetic anomalies detection over seismic regions during an earthquake, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5291, https://doi.org/10.5194/egusphere-egu22-5291, 2022.

EGU22-5803 | Presentations | NH6.3 | Highlight

SAR-based scientific products in support to recovery from hurricanes and earthquakes: lessons learnt in Haiti from the CEOS Recovery Observatory pilot to the demonstrator 

Deodato Tapete, Francesca Cigna, Agwilh Collet, Hélène de Boissezon, Robin Faivre, Andrew Eddy, Jens Danzeglocke, Philemon Mondesir, David Telcy, Esther Manasse, Boby Emmanuel Piard, and Samuel Généa

Since 2014, the Committee on Earth Observation Satellites (CEOS) has been working on means to increase the contribution of satellite data to recovery from major disasters. The 4-year-long Recovery Observatory (RO) pilot project, led by CNIGS with technical support from CNES [www.recovery-observatory.org], was triggered to address the needs of the Haitian community in the south-west of the country involved in recovery after the impact of Hurricane Matthew in October 2016. Following that experience, the RO Concept was published in an Advocacy Paper [1] and the RO Demonstrator Team was created with the aim of activating a series of 3 to 6 ROs after major events between 2021 and late 2023 [2].

It is with regard to the RO pilot and the latest RO demonstrator activation, after the 7.2 Mw earthquake and Hurricane Grace that occurred in August 2021, that the following lessons learnt in Haiti are discussed:

  • technical achievements and challenges in the use of SAR data from high revisit sensors (e.g. Sentinel-1) and on-demand acquisitions from high resolution missions (e.g. COSMO-SkyMed, TerraSAR-X) for terrain motion and land surface change applications;
  • the role that the collaboration with users and stakeholders can play to add value to SAR-based scientific products;
  • capacity building and training enabling local champions and public stakeholders to effectively uptake SAR technology for their own duties of disaster risk management.

During the pilot, a wide-area regional analysis was undertaken by processing Sentinel-1 data in ESA’s Geohazards Exploitation Platform [3] to identify areas affected by ground motions that are not suitable for reconstruction. The exercise also allowed an understanding of the factors limiting the exploitation of this resource by users (e.g. skill gaps, limited internet connectivity).

The high-resolution monitoring activity with ASI’s COSMO-SkyMed data, CNES’ Pléiades images and ground-truth validation over 3 priority areas defined by the Haitian users allowed the identification of the following categories of surface changes:

(a) environmental, along the Grand’Anse River south of Jérémie, mixed with quarrying and unregulated waste disposal [4];

(b) geological, along the rock cliffs north-west of Jérémie where toppling and lateral spreading may be worsened by future disasters, thus causing potential risks to small villages and isolated dwellings;

(c) urban, within the outskirts of Jérémie due to reconstruction and new constructions in unstable areas;

(d) rural, due to landslides to be distinguished from similar signals associated with agricultural practices along the slopes in Camp Perrin.

This knowledge was used as the most up-to-date baseline to assess the impact of the August 2021 earthquake and hurricane, and the current process of recovery on the south-west Haiti peninsula in the framework of the RO demonstrator activation. The RO collaborated closely with local partners, and CNIGS performed satellite-based analysis of damage after the earthquake. A long-term objective of the RO remains strong capacity development of local actors.

 

References:

[1] https://www.gfdrr.org/en/publication/use-of-eo-satellites-recovery

[2] https://ceos.org/document_management/Working_Groups/WGDisasters/WGMeetings/WGDisasters_Mtg16_Virtual/CEOS_WGD16_RO_Demonstrator.pdf

[3] Cigna, F. et al. (2020) Proceedings of 2020 IEEE IGARSS, pp. 6867–6870. https://doi.org/10.1109/IGARSS39084.2020.9323231

[4] De Giorgi, A. et al. (2021) Remote Sensing, 13 (17), 3509. https://doi.org/10.3390/rs13173509

How to cite: Tapete, D., Cigna, F., Collet, A., de Boissezon, H., Faivre, R., Eddy, A., Danzeglocke, J., Mondesir, P., Telcy, D., Manasse, E., Piard, B. E., and Généa, S.: SAR-based scientific products in support to recovery from hurricanes and earthquakes: lessons learnt in Haiti from the CEOS Recovery Observatory pilot to the demonstrator, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5803, https://doi.org/10.5194/egusphere-egu22-5803, 2022.

EGU22-5958 | Presentations | NH6.3

Mapping and kinematic history of active landslides in Panachaikon Mountain, Achaia (Peloponnese, Greece) by InSAR Time Series analysis and its relationship to rainfall patterns 

Varvara Tsironi, Athanassios Ganas, Ioannis Karamitros, Eirini Efstathiou, Ioannis Koukouvelas, and Efthimios Sokos

We investigate the kinematic behaviour of active landslides at several well-known locations around Panachaikon Mountain, Achaia (Peloponnese, Greece), using space geodetic data (InSAR/GNSS). We process LiCSAR interferograms produced from Sentinel-1 (C-band) acquisitions using the open-source software LiCSBAS and obtain average displacement maps for the period 2016–2021. The maximum displacement rate of each landslide is located at about the centre of each landslide. The average E–W velocity of the Krini landslide is 4 cm/yr (towards the east) and 1 cm/yr downwards. The line-of-sight (LOS) velocity of this landslide compares well, to within ±3 mm/yr, with a co-located GNSS station (25 mm/yr for InSAR and 28 mm/yr for GNSS for the descending orbit). Our results also suggest that there is a correlation between rainfall and landslide motion. A cross-correlation analysis of our data suggests a mean time lag of 13.5 days between the maximum seasonal rainfall and the change in LOS displacement rate. It also seems that the amount of total seasonal rainfall controls the increase in displacement rate, as changes of 40–550% in the displacement rate of the Krini landslide were detected following seasonal maxima of rainfall values at the nearby meteorological station. A large part of this mountainous region of Achaia suffers from slope instability that is manifested in various degrees of ground displacement (detectable using space geodesy), greatly affecting its morphological features and inhabited areas.
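The lag estimation can be illustrated with a toy cross-correlation: synthetic daily series in which the displacement rate trails a seasonal rainfall signal by a known lag (14 days here, chosen for illustration near the 13.5-day value reported above) recover that lag by scanning candidate offsets.

```python
import numpy as np

# Synthetic daily series: a seasonal rainfall signal and a displacement
# rate that lags it by 14 days (illustrative values only).
t = np.arange(365)
rain = np.clip(np.sin(2 * np.pi * t / 365), 0, None)
rate = np.roll(rain, 14) + 0.001 * np.random.default_rng(1).standard_normal(365)

# Scan candidate lags: correlate rain(t) with rate(t + lag); the lag
# maximising the correlation estimates the rainfall-to-motion delay.
def corr_at_lag(lag):
    return np.corrcoef(rain[:365 - lag], rate[lag:])[0, 1]

best_lag = max(range(1, 61), key=corr_at_lag)
```

Real displacement series are unevenly sampled at the SAR revisit interval, so in practice the correlation is computed on interpolated or epoch-aligned series rather than clean daily data.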

We acknowledge funding by the project PROIΟΝ “Multiparametric microsensor monitoring platform of the Enceladus Hellenic Supersite”, co-financed by Greece and the European Union.

How to cite: Tsironi, V., Ganas, A., Karamitros, I., Efstathiou, E., Koukouvelas, I., and Sokos, E.: Mapping and kinematic history of active landslides in Panachaikon Mountain, Achaia (Peloponnese, Greece) by InSAR Time Series analysis and its relationship to rainfall patterns, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5958, https://doi.org/10.5194/egusphere-egu22-5958, 2022.

EGU22-8990 | Presentations | NH6.3

Small size but densely distributed: Insights from a LiDAR-based manual inventory of the recent earthquake-induced landslides case in Japan 

Rasis Putra Ritonga, Takashi Gomi, Roy C. Sidle, Yohei Arata, and Rozaqqa Noviandi

Individually delineated landslide inventories are essential for analyzing post-earthquake-induced landslide (EIL) hazard assessments, particularly for examining statistical correlations between landslides (e.g., frequency and size) and physical parameters. Despite rapid advances in remote sensing technology, previously recorded EIL inventories still have limitations in achieving fine-quality inventories, mainly due to the difficulty of delineating individual landslides manually over large areas with low-resolution satellite images. To be specific, a fine-quality inventory requires the ability to detect landslide scars and deposits separately over whole affected areas, to recognize smaller landslide sizes (<10³ m²) under canopies, and to avoid amalgamations, i.e., combinations of several individual landslides in a single polygon, which can lead to severe distortion of landslide statistics. The latest technology from LiDAR digital terrain models (DTMs) allows geomorphologists to manually delineate landslides precisely, but most studies have only focused on deep-seated landslides. Thus, the main objective of this study was to delineate recent EILs based on LiDAR-DTM visualization over whole landslide-affected areas and to test preliminary statistics comparing our manual LiDAR-based inventory (MLI) with an automatic aerial-based inventory (AAI) in the same areas, in addition to NASA’s global EIL database.

We manually delineated the recent landslides triggered by the 2018 Eastern Iburi earthquake in the Atsuma basin in Hokkaido within an area of 266 km², accounting for about 90% of the total area affected by landslides. Shaded relief derived from the LiDAR-DTM (0.5 m) and aerial photographs (0.2 m) were used to identify landslide morphometrics. An AAI collected in the same study area (Kita, 2018) was used for comparison with the MLI. As a result, our MLI was able to detect a total of 17,160 landslides (total landslide area: 27.5 km²), while the automatic AAI detected only 4,241 landslides (total landslide area: 33 km²), probably because our MLI was able to recognize more small landslides and separate individual landslides from amalgams. The mean landslide density for the MLI is four times greater (64 landslides/km²) than for the AAI (16 landslides/km²), and it is also considered the densest landslide inventory recorded in the past 20 years based on NASA's global EIL inventory database. Based on the binned frequency–area distribution (FAD), the MLI has a power-law exponent (β) of 3.4 and a rollover point of 800 m², whereas the AAI yields 2.7 and 3×10³ m², respectively, probably because the AAI overestimates its delineation by including channels and depositional regions in the delineated polygons. Compared with all global EIL inventories (mean β: 2.4), the value for the MLI was found to be larger, indicating that the Iburi EIL event involved the smallest-sized landslides recorded so far (50% of landslides are smaller than 10³ m²), yet very densely distributed. Our findings suggest that MLIs might reveal hidden, unexpected statistics of the number and size of EILs, including exposing smaller landslides under the canopy and splitting amalgams.
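The exponent estimation behind the FAD comparison can be sketched with synthetic areas: draw sizes from p(a) ∝ a^(−β) above a rollover, then recover β with the standard maximum-likelihood (Hill) estimator. The β = 3.4 and 800 m² values below simply mirror the numbers reported above; the samples themselves are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
beta_true, a_min, n = 3.4, 800.0, 17160

# Draw landslide areas (m^2) from a power-law tail p(a) ~ a^-beta,
# a >= a_min, via inverse-transform sampling.
u = rng.random(n)
areas = a_min * (1.0 - u) ** (-1.0 / (beta_true - 1.0))

# Maximum-likelihood estimate of the power-law exponent (Hill estimator).
beta_hat = 1.0 + n / np.log(areas / a_min).sum()
```

The study fits binned FADs rather than raw samples, but the MLE avoids the binning-choice sensitivity that affects least-squares fits of binned data.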

How to cite: Ritonga, R. P., Gomi, T., Sidle, R. C., Arata, Y., and Noviandi, R.: Small size but densely distributed: Insights from a LiDAR-based manual inventory of the recent earthquake-induced landslides case in Japan, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8990, https://doi.org/10.5194/egusphere-egu22-8990, 2022.

EGU22-9066 | Presentations | NH6.3

A fuzzy multi-criteria decision tree model for flood hazard assessment in the Dhemaji district of the state of Assam in India 

Diganta Barman, Anupal Baruah, Arjun BM, and Shiv Prasad Aggarwal

Flooding in the north-eastern part of India is a chronic event originating from the River Brahmaputra and its tributaries and causes immense loss of human life and property. Particularly during the monsoon period, the north-bank tributaries wreak havoc on the nearby regions, especially in the Dhemaji district. These tributaries mainly originate from glacier-fed regions and inundate different locations of the Dhemaji district. In this work, a fuzzy multi-criteria decision analysis model is developed to prepare the flood hazard map of the Dhemaji district. Six different layers are considered in the analysis: elevation profile, flood occurrence period, river confluence points of the second-order tributaries, historical embankment breach locations, normalized difference vegetation index and normalized difference moisture index. The outputs from the model are categorized into very low to high hazard zones. The consistency ratio calculated from the assigned weights is found to be 0.092. The flood hazard map computed with the present model is compared with observed flood occurrence events and found to be realistic and satisfactory.
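The consistency check behind the reported ratio of 0.092 can be sketched as follows. The pairwise judgements below are invented for illustration (and classical, non-fuzzy AHP is used), so the resulting ratio will not match the study's value, but the CR = CI/RI mechanics are the same.

```python
import numpy as np

# Hypothetical pairwise comparison judgements for the six layers
# (elevation, flood period, confluences, breaches, NDVI, NDMI);
# Saaty's 1-9 scale, reciprocal matrix by construction.
upper = {(0, 1): 2, (0, 2): 3, (0, 3): 2, (0, 4): 4, (0, 5): 5,
         (1, 2): 2, (1, 3): 1, (1, 4): 3, (1, 5): 3,
         (2, 3): 1, (2, 4): 2, (2, 5): 2,
         (3, 4): 3, (3, 5): 3,
         (4, 5): 1}
n = 6
A = np.ones((n, n))
for (i, j), v in upper.items():
    A[i, j], A[j, i] = v, 1.0 / v

# lambda_max from the principal eigenvalue; CI = (lambda_max - n)/(n - 1);
# CR = CI / RI with Saaty's random index RI = 1.24 for n = 6.
# CR < 0.1 means the judgements are acceptably consistent.
lam = np.linalg.eigvals(A).real.max()
CI = (lam - n) / (n - 1)
CR = CI / 1.24
```

A CR at or below 0.1, as with the study's 0.092, is the conventional acceptance threshold for proceeding with the derived weights.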

Keywords: Fuzzy AHP, Multi criteria decision analysis, Flood occurrence, Embankment breach, River confluence points

How to cite: Barman, D., Baruah, A., BM, A., and Aggarwal, S. P.: A fuzzy multi-criteria decision tree model for flood hazard assessment in the Dhemaji district of the state of Assam in India, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9066, https://doi.org/10.5194/egusphere-egu22-9066, 2022.

EGU22-9856 | Presentations | NH6.3

Satellite-derived shorelines extracted using SAET for characterizing the effect of Storm Gloria in the Ebro Delta (W Mediterranean) 

Josep E. Pardo-Pascual, Carlos Cabezas-Rabadán, Jesús Palomar-Vázquez, Alfonso Fernández-Sarría, Jaime Almonacid-Caballer, Paola Emilia Souto-Ceccon, Juan Montes, Clara Armaroli, and Paolo Ciavola

Coastal storms constitute a key factor controlling shoreline position changes. They may deeply modify the beach morphology and contribute to erosive processes. Earth observation data, such as the images from the Sentinel satellites of ESA's Copernicus program and the Copernicus Contributing Missions, offer valuable information for characterizing beach changes.

SAET (Shoreline Analysis and Extraction Tool) is an open-source tool developed within the framework of the ECFAS project, intended to enable automatic shoreline extraction from optical satellite imagery. SAET is assessed in order to determine the accuracy of the resulting satellite-derived shorelines (SDSs) as well as its capacity to detect and characterise beach changes. The SDSs are employed to define the changes in shoreline position along 82 km of beaches in the Ebro Delta (E Spain) associated with Storm Gloria. The storm peaked on 22 January 2020 (significant wave heights over 7 m), heavily affecting the whole of eastern Spain.

The accuracy of the SDS extracted using SAET was assessed by comparing its position against a shoreline photo-interpreted on a VHR image. A SPOT 7 image (1.5 m spatial resolution), acquired 37 minutes before the Sentinel-2 image used for defining the SDS, was employed for this purpose. Both images were acquired on 26 January, four days after the peak of the storm. An average error of 5.18 m (seawards) ± 9.98 m was measured.

The comparison of the position of the SDSs obtained before (18/01/2020) and after the peak of the storm (26/01/2020) allows mapping of the retreat of the shoreline position linked to this event. Within the ECFAS project this approach will be extended to a number of other test cases.

The ECFAS (European Coastal Flood Awareness System) project (https://www.ecfas.eu/) has received funding from the EU H2020 research and innovation programme under Grant Agreement No 101004211.

How to cite: Pardo-Pascual, J. E., Cabezas-Rabadán, C., Palomar-Vázquez, J., Fernández-Sarría, A., Almonacid-Caballer, J., Souto-Ceccon, P. E., Montes, J., Armaroli, C., and Ciavola, P.: Satellite-derived shorelines extracted using SAET for characterizing the effect of Storm Gloria in the Ebro Delta (W Mediterranean), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9856, https://doi.org/10.5194/egusphere-egu22-9856, 2022.

EGU22-9857 | Presentations | NH6.3

SAET: a new tool for automatic shoreline extraction with subpixel accuracy for characterising shoreline changes linked to coastal storms 

Jesús Palomar-Vázquez, Jaime Almonacid-Caballer, Carlos Cabezas-Rabadán, and Josep E. Pardo-Pascual

SAET (Shoreline Analysis and Extraction Tool) is a tool intended to enable the automatic detection and quantification of the changes experienced by the shoreline position on beaches affected by coastal storms. It is an open-source tool developed within the framework of the ECFAS project, which aims to demonstrate the technical and operational feasibility of a European Coastal Flood Awareness System. SAET takes advantage of the freely available images from the Sentinel satellites of ESA's Copernicus program and the Copernicus Contributing Missions. The tool currently uses the mid-resolution images of the Sentinel-2 and Landsat 8 satellites, although in the future it will allow the use of images from other satellites (such as the recently available Landsat 9).

In order to characterize the shoreline changes caused by a coastal storm on a certain coastal segment, SAET identifies, downloads, and processes the most suitable satellite images (those closest in time and with low cloud coverage). The shoreline extraction starts with an approximate definition of the shoreline position at pixel level using the AWEInsh water index. Subsequently, the subpixel extraction algorithm is applied over dynamic coastal stretches not affected by clouds, operating on the short-wave infrared bands. For each of the analysed images, the process results in a satellite-derived shoreline in vector format. The analysis of shoreline position changes is intended to offer quantitative data about the state of beaches in terms of erosion/accretion, and about their subsequent capacity to recover after storm episodes.
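The pixel-level water-index step can be sketched with the AWEInsh formula (Feyisa et al., 2014), applied here to a tiny hypothetical patch of surface-reflectance values. The exact thresholding SAET applies afterwards is an assumption here; the index formula itself is standard.

```python
import numpy as np

# Hypothetical surface-reflectance bands for a small Sentinel-2 patch
# (green = B3, NIR = B8, SWIR1 = B11, SWIR2 = B12), values in [0, 1].
green = np.array([[0.08, 0.10], [0.06, 0.05]])
nir   = np.array([[0.05, 0.30], [0.04, 0.35]])
swir1 = np.array([[0.03, 0.25], [0.02, 0.30]])
swir2 = np.array([[0.02, 0.20], [0.01, 0.25]])

# AWEInsh: positive values indicate water, giving the pixel-level
# shoreline approximation that is then refined to subpixel accuracy.
awei_nsh = 4 * (green - swir1) - (0.25 * nir + 2.75 * swir2)
water_mask = awei_nsh > 0
```

The land/water boundary of this mask is the coarse shoreline; the subpixel step then locates the shoreline within the boundary pixels using the SWIR bands.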

 

The ECFAS (European Coastal Flood Awareness System) project (https://www.ecfas.eu/) has received funding from the EU H2020 research and innovation programme under Grant Agreement No 101004211.

How to cite: Palomar-Vázquez, J., Almonacid-Caballer, J., Cabezas-Rabadán, C., and Pardo-Pascual, J. E.: SAET: a new tool for automatic shoreline extraction with subpixel accuracy for characterising shoreline changes linked to coastal storms, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9857, https://doi.org/10.5194/egusphere-egu22-9857, 2022.

EGU22-9901 | Presentations | NH6.3 | Highlight

Thunderslide - from rainfall to preliminary landslide mapping: an automated open-data workflow for regional authorities 

Stefano Crema, Alessandro Sarretta, Donato Maio, Francesco Marra, Giorgia Macchi, Velio Coviello, Marco Borga, Lorenzo Marchi, and Marco Cavalli

Gathering systematic information on the effects of extreme weather events (e.g., floods, landslides and debris flows, windthrows) is a fundamental prerequisite for establishing rapid-response strategies and putting management policies into practice. However, the collection of field data requires significant economic and human efforts by local authorities. Furthermore, events occurring in remote areas are rarely detected and mapped accurately, as they have a low chance of intersecting human infrastructure. These missed detections lead to incorrect assumptions about both the spatial distribution of extreme events and, especially, their real occurrence probability. This work proposes a framework for the automatic identification of severe weather events that may have caused important erosion processes or vegetation damage, combined with rapid preliminary change detection mapping over the identified areas. The proposed approach leverages the free availability of both high-resolution global-scale radar rainfall products and Sentinel-2 multi-spectral images to identify the areas to be analyzed and to carry out change detection algorithms, respectively. Radar rainfall data are analyzed, and the areas where high-intensity rainfall and/or very important cumulative precipitation has occurred are used as a mask for restricting the subsequent analysis, which, in turn, is based on a multi-spectral change detection algorithm. The whole procedure feeds a geodatabase (storing identified events, retrieved data and computed changes) for proper data management and subsequent analyses. The testing phase of the proposed methodology has provided encouraging results: applications to selected mountain catchments hit by intense events in northeastern Italy were capable of recognizing flooded areas, debris-flow and shallow landslide activations, and windthrows.
The described approach can serve as a preliminary step toward detailed post-event surveys, but also as a preliminary "quick and dirty" mapping framework for local authorities, especially when resources for ad hoc field surveys are not available or when an event triggers changes in remote areas. Such systematic identification of potential changes can support a more homogeneous and systematic detection and census of events and their effects. The workflow presented here is intended as a starting point on top of which more modules can be added (e.g., radar climatology, SAR change detection for near real-time applications, other severe-event sources such as lightning, earthquakes or wildfires, machine learning algorithms for image classification, land-use and morphological filtering of the results). Future improvements of the described procedure could finally be devised to allow continuous operational activity and to maintain an open-source software implementation.
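The two-step screening described above (rainfall-based masking followed by multi-spectral change detection) can be sketched as follows. The thresholds, array names and the NDVI-drop criterion are illustrative assumptions, not the authors' operational settings:

```python
import numpy as np

# Hypothetical sketch of the two-step screening: (1) mask cells where radar
# rainfall exceeds intensity or accumulation thresholds, (2) flag candidate
# change pixels from pre/post-event Sentinel-2 NDVI within that mask.

def rainfall_mask(intensity_mm_h, cumul_mm, i_thr=30.0, c_thr=100.0):
    """Cells with high-intensity and/or high cumulative rainfall."""
    return (intensity_mm_h >= i_thr) | (cumul_mm >= c_thr)

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); small epsilon avoids division by zero
    return (nir - red) / (nir + red + 1e-9)

def change_candidates(pre_nir, pre_red, post_nir, post_red, mask, d_thr=0.2):
    """Pixels inside the rainfall mask whose NDVI dropped by more than d_thr."""
    d_ndvi = ndvi(pre_nir, pre_red) - ndvi(post_nir, post_red)
    return mask & (d_ndvi > d_thr)
```

In an operational setting the mask would come from the radar rainfall product and the NDVI bands from the Sentinel-2 acquisitions closest in time before and after the event.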

How to cite: Crema, S., Sarretta, A., Maio, D., Marra, F., Macchi, G., Coviello, V., Borga, M., Marchi, L., and Cavalli, M.: Thunderslide - from rainfall to preliminary landslide mapping: an automated open-data workflow for regional authorities, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9901, https://doi.org/10.5194/egusphere-egu22-9901, 2022.

EGU22-10149 | Presentations | NH6.3

Differential SAR interferometry for estimating snow water equivalent in central Apennines complex orography from Sentinel-1 satellite within SMIVIA project 

Gianluca Palermo, Edoardo Raparelli, Nancy Alvan Romero, Mario Papa, Massimo Orlandi, Paolo Tuccella, Annalina Lombardi, Errico Picciotti, Saverio Di Fabio, Elena Pettinelli, Elisabetta Mattei, Sebastian Lauro, Barbara Cosciotti, David Cappelletti, Massimo Pecci, and Frank Marzano

Snow-mantle extent (or area), its local thickness (or height) and mass (often expressed by the snow water equivalent, SWE) are the main parameters characterizing snow deposits. These parameters are of particular importance in meteorology, hydrology, and climate monitoring applications. The considerable geographical extension of snow layers and their typical spatial heterogeneity make it impractical to monitor snow by means of direct or indirect in situ measurements, suggesting the exploitation of satellite technologies. Space-borne C-band synthetic aperture radar (SAR) sensors (such as those operating in the Sentinel-1 A and B missions) are particularly suitable for the analysis of snow deposits, providing data with resolutions up to some meters, global coverage and a 6-day revisit time. Most satellite remote sensing applications have focused on major mountain systems, such as the Andes, the Alps, or the Himalayan region. Other important mountain systems, like the Italian Apennines, have not been extensively considered, probably due to their complex orography and the high variability of their snow cover. Nevertheless, the central Apennines play a central role in the meteorological dynamics of the Mediterranean area, and they host the southernmost European glacier – namely, the Calderone glacier – whose evolution represents a relevant indicator of climate change, at least for the mid-latitudes.

The implementation of the objectives of the SMIVIA (Snow-mantle Modeling, Inversion and Validation using multi-frequency multi-mission InSAR in central Apennines) project rests on the development of innovative simulation techniques and snow-parameter estimators from SAR and differential interferometric SAR (DInSAR) measurements, in synergy with measurements from optical remote sensing sensors, data from ground weather radar and simulations from dynamic snow-cover models, within an inverse-problem approach with a robust physical-statistical rationale. Furthermore, the scientific validity of the achievable results is supported by a large, systematic validation effort in the Apennine area with in-situ measurements, comprising 3 pilot sites manned with meteorological and snow measurements, dielectric and georadar measurements, trenches and micro-macrophysical sampling, 6 semi-automatic verification sites, 31 remote auxiliary sites and 1 site of glaciological interest (Calderone) with ad hoc campaigns. SAR data processing can be performed in different ways to retrieve snow parameters.

In this work we exploit the SAR backscattering coefficient to study the backscattering at the air-snow and snow-ground interfaces, together with the volumetric effects of the snow layer. The distinction between wet and dry snow is obtained by exploiting the copolar and cross-polar SAR returns. DInSAR is exploited to analyze the effects of air-snow refraction and snow-ground reflection, together with the coherence and phase shifts between two sequential images. We will present the Sentinel-1 DInSAR processing chain to estimate snowpack height (SPH), combined with SAR backscatter data for wet-snow discrimination. The potential of physically based analytical and statistical inversion algorithms, trained by forward electromagnetic and snowpack models, is introduced and discussed. The processing chain is tested in the central Apennines, using validation sites with snow-pit in-situ measurements, and potential developments and critical issues are discussed.
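Wet-snow discrimination from C-band backscatter is commonly done with a ratio test against a dry reference scene, since wet snow strongly absorbs microwave energy. The sketch below follows that widespread idea; the -2 dB threshold and the single-channel formulation are assumptions for illustration, whereas the SMIVIA chain combines copolar and cross-polar returns:

```python
import numpy as np

# Illustrative ratio-based wet-snow test: backscatter in the acquisition under
# test typically drops by roughly 2-3 dB relative to a dry-snow reference when
# the snowpack is wet. Threshold value is an assumption, not a project setting.

def wet_snow_mask(sigma0_test_db, sigma0_ref_db, thr_db=-2.0):
    """True where backscatter dropped below the dry reference by more than |thr_db| dB."""
    return (sigma0_test_db - sigma0_ref_db) < thr_db
```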

How to cite: Palermo, G., Raparelli, E., Alvan Romero, N., Papa, M., Orlandi, M., Tuccella, P., Lombardi, A., Picciotti, E., Di Fabio, S., Pettinelli, E., Mattei, E., Lauro, S., Cosciotti, B., Cappelletti, D., Pecci, M., and Marzano, F.: Differential SAR interferometry for estimating snow water equivalent in central Apennines complex orography from Sentinel-1 satellite within SMIVIA project, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10149, https://doi.org/10.5194/egusphere-egu22-10149, 2022.

EGU22-10737 | Presentations | NH6.3

Correlation analysis between the subsides reported as sinkholes and the thickness of the clays of the shallow aquifer in Mexico City (CDMX). 

Sergio García Cruzado, Nelly Ramírez Serrato, Graciela Herrera Zamarrón, Fabiola Yépez Rincón, Mario Hernández Hernández, José Hernandez Espriu, and Victor Velasco Herrera

Mexico City (CDMX) is the capital of Mexico. The city has historically been at risk of subsidence damage to its civil structures because of the ground it is founded on. The area began to be populated with settlements in flooded areas used for crops, followed by colonization and the subsequent drying out of the lake areas, which ended up being urbanized. The areas formerly belonging to Lake Texcoco, previously used for cultivation, were drained to expand the developable coverage. As water demand grew, it became necessary to extract groundwater from the shallow aquifer to supply the growing city. Although the depletion of this aquifer coincides with subsidence areas, previous studies indicate that there is no linear correlation between them. The objective of this project is to collect the different criteria related to the presence of sinkholes (as an effect of subsidence), such as population and well density, distance to faults, fractures, roads and drainage, elevation and slope of the terrain, the thickness of subsoil clays, the type of rock and soil, the rate of subsidence, and the geotechnical zones in the study area.

The criteria maps were compared with previous sinkhole mapping registered between 2017 and 2019. The statistics consisted of calculating the percentage of coincidence in coverage, categorized linear regression, and the application of logarithms as a normalization method to evaluate the correlation. The most relevant results include the relationship between the sinkholes and the road zones (60%); the highest correlation registered for clays is 0.437 when areas of competent rock are considered, while over the whole study site a value of 0.36 is reached, obtained by applying the logarithm of the clay values and correlating it with the sinkhole areas.
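As a minimal illustration of the log-normalization step described above, one can correlate the logarithm of clay thickness with a per-unit sinkhole indicator. The data below are synthetic; the 0.437 and 0.36 figures in the abstract come from the authors' real dataset:

```python
import math

# Pearson correlation of log-transformed clay thickness against a synthetic
# sinkhole count per mapping unit. All values here are invented for illustration.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

clay_m = [2.0, 8.0, 20.0, 35.0, 50.0]   # hypothetical clay thickness per unit (m)
sinkholes = [0.0, 1.0, 2.0, 2.0, 3.0]   # hypothetical sinkhole counts per unit
r = pearson([math.log(c) for c in clay_m], sinkholes)
```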

 

How to cite: García Cruzado, S., Ramírez Serrato, N., Herrera Zamarrón, G., Yépez Rincón, F., Hernández Hernández, M., Hernandez Espriu, J., and Velasco Herrera, V.: Correlation analysis between the subsides reported as sinkholes and the thickness of the clays of the shallow aquifer in Mexico City (CDMX)., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10737, https://doi.org/10.5194/egusphere-egu22-10737, 2022.

EGU22-10867 | Presentations | NH6.3

Comparing Academia's Perception of Needed SDG Research to SDG Progress Reports and Known SDG Synergies and Tradeoffs 

Hannah Chaney, Majdi Abou Najm, and Maria Jose Lopez Serrano

The Sustainable Development Goals (SDGs) are a set of 17 goals released by the United Nations (UN) in 2015. Each goal has a target that countries and, ideally, the world should aim to reach in order to create sustainability within that sector for current and future generations. Seven years after the SDGs were released, thousands of studies and academic articles have promoted the SDGs, and the UN has released regular country-specific updates on goal progress. In addition, multiple studies have highlighted synergies and tradeoffs between SDGs that have the potential to significantly influence goal completion (Biggeri et al., 2019; Moyer & Bohl, 2019; Jose-Serrano, 2022; Zhao et al., 2021). With this information in mind, this study aims to conduct a large-scale network analysis of research articles concerning SDG progress to answer the following questions: Which SDGs receive the most attention from researchers? What are the perceptions in academia regarding the synergies/trade-offs between the SDGs? The network analysis will be conducted using the search engine SCOPUS, resulting in hundreds of retrieved papers for each category within the SDGs. Results from this study will be compared to current SDG progress and known synergies and tradeoffs within the SDGs in order to determine how the perception of the SDGs compares with research conclusions and known SDG goal progress. This information will serve as an indication of which goals, synergies, or tradeoffs researchers and industries are aware of and readily researching, and which of these categories need more attention within academic circles. The ultimate goal of this research is that the results can be used as a tool to advocate for the SDG research most needed for the SDG goals to reach completion by 2030.
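The kind of co-occurrence counting that underlies such a network analysis can be sketched as follows: each retrieved article is tagged with the SDGs it addresses, node weights count mentions, and edge weights count joint mentions (a proxy for perceived synergies or trade-offs). The paper tags below are invented for illustration; real tags would come from SCOPUS queries:

```python
from collections import Counter
from itertools import combinations

# Synthetic article-to-SDG tags; in the study these would be derived from
# SCOPUS search results, one tag set per retrieved paper.
papers = [
    {"SDG2", "SDG6"},           # food security & clean water
    {"SDG6", "SDG13"},          # water & climate action
    {"SDG2", "SDG6", "SDG13"},  # all three jointly
    {"SDG7"},                   # affordable energy, no co-occurrence
]

# Node weights: how often each SDG is addressed.
mentions = Counter(tag for p in papers for tag in p)

# Edge weights: how often two SDGs are addressed together in one paper.
edges = Counter(frozenset(pair) for p in papers for pair in combinations(sorted(p), 2))
```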

How to cite: Chaney, H., Abou Najm, M., and Jose Lopez Serrano, M.: Comparing Academia's Perception of Needed SDG Research to SDG Progress Reports and Known SDG Synergies and Tradeoffs, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10867, https://doi.org/10.5194/egusphere-egu22-10867, 2022.

EGU22-12363 | Presentations | NH6.3 | Highlight

MIPS: a new airborne Multiband Interferometric and Polarimetric SAR system for the Italian territory monitoring 

Antonio Natale, Paolo Berardino, Gianfranco Palmese, Carmen Esposito, Riccardo Lanari, and Stefano Perna

Synthetic Aperture Radar (SAR) systems nowadays represent standard tools for high-resolution Earth observation in all weather conditions [1].

Indeed, thanks to well-established techniques based on SAR data, such as SAR interferometry (InSAR), Differential InSAR (DInSAR) and SAR polarimetry (PolSAR), it is possible to generate added-value products, such as Digital Elevation Models, ground deformation maps and time series, and soil moisture maps, and to exploit these systems for the remote monitoring of both natural and anthropic phenomena [2] - [5].

In addition, recent advancements in radar, navigation and aeronautical technologies allow us to benefit from lightweight and compact SAR sensors that can be mounted onboard highly flexible aerial platforms [6] - [7]. These aspects offer the opportunity to design novel observation configurations and to explore innovative estimation strategies based, for instance, on data provided by multi-frequency, multi-polarization, multi-antenna or even multi-platform SAR systems.

This work is aimed at showing the imaging capabilities of the new Italian airborne SAR system named MIPS (Multiband Interferometric and Polarimetric SAR).

The system is based on the Frequency Modulated Continuous Wave (FMCW) technology and is able to operate at both L- and X-band. In particular, the L-band sensor is able to acquire fully polarized radar data, while the X-band sensor exhibits single-pass interferometric SAR capabilities.

A detailed description of both the MIPS system and its imaging capabilities will be provided at the conference time, with a special emphasis given to the activities carried out within the ASI-funded DInSAR-3M project.

 

References

[1] A. Moreira, P. Prats-Iraola, M. Younis, G. Krieger, I. Hajnsek, K. P. Papathanassiou, “A tutorial on Synthetic Aperture Radar”, IEEE Geoscience and Remote Sensing Magazine, pp. 6-43, March 2013.

[2] R. Bamler, P. Hartl, “Synthetic Aperture Radar Interferometry”, Inverse Problems, vol. 14, no. 4, R1, 1998.

[3] P. Berardino, G. Fornaro, R. Lanari and E. Sansosti, “A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms”, IEEE Trans. Geosci. Remote Sens., vol. 40, no. 11, pp. 2375-2383, Nov. 2002.

[4] J. Lee, E. Pottier, “Polarimetric Radar Imaging: From Basics to Applications”, CRC Press, New York, 2009.

[5] R. Lanari, M. Bonano, F. Casu, C. De Luca, M. Manunta, M. Manzo, G. Onorato, I. Zinno, “Automatic Generation of Sentinel-1 Continental Scale DInSAR Deformation Time Series through an Extended P-SBAS Processing Pipeline in a Cloud Computing Environment”, Remote Sensing, 2020, 12, 2961.

[6] S. Perna, G. Alberti, P. Berardino, L. Bruzzone, D. Califano, I. Catapano, L. Ciofaniello, E. Donini, C. Esposito, C. Facchinetti, R. Formaro, G. Gennarelli, C. Gerekos, R. Lanari, F. Longo, G. Ludeno, M. Mariotti d’Alessandro, A. Natale, C. Noviello, G. Palmese, C. Papa, G. Pica, F. Rocca, G. Salzillo, F. Soldovieri, S. Tebaldini, S. Thakur, “The ASI Integrated Sounder-SAR System Operating in the UHF-VHF Bands: First Results of the 2018 Helicopter-Borne Morocco Desert Campaign”, Remote Sensing, 2019, 11(16), 1845.

[7] C. Esposito, A. Natale, G. Palmese, P. Berardino, R. Lanari, S. Perna, “On the Capabilities of the Italian Airborne FMCW AXIS InSAR System”, Remote Sens. 2020, 12, 539.

How to cite: Natale, A., Berardino, P., Palmese, G., Esposito, C., Lanari, R., and Perna, S.: MIPS: a new airborne Multiband Interferometric and Polarimetric SAR system for the Italian territory monitoring, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12363, https://doi.org/10.5194/egusphere-egu22-12363, 2022.

The increasing diffusion of PS (Persistent Scatterers) InSAR services across the world and the early adoption of PS-monitoring techniques provide civil protection authorities with effective and objective tools for disaster risk prevention, enhancing their capability to detect early-stage terrain deformations even in unpopulated areas.

More in detail, the PS-monitoring technique exploits the high temporal resolution provided by recent satellite constellations (e.g., Sentinel-1), with revisit times of about 12 days, to detect, at a regional scale, the so-called "anomalies" (i.e., the Persistent Scatterers which show acceleration compared to a given deformation trend). Considering that deformation anomalies can be caused by many factors not related to an incipient landslide (the so-called "false positives"), terrain investigations are usually required to assess a real landslide hazard.

Furthermore, to be effective, the terrain investigations aimed at validating a potential incipient landslide should be conducted within a short time, to allow timely implementation of safety measures by the civil protection authorities.

Many constraints, such as the limited availability of human resources and terrain conditions, usually hamper an extensive field validation of the anomalies provided by PS-InSAR monitoring services. A fast and objective method is thus necessary to filter and prioritize the terrain deformation anomalies that have the highest probability of indicating an incipient landslide and require immediate terrain investigation.

To make that possible, we developed a semiautomated GIS-based information system, called ARTEMIS (Advanced Regional TErrain Motion Information System), which allows an objective and fast selection of the PS-InSAR anomalies, detected twice a month by the PS-monitoring services, that should be investigated.

ARTEMIS is a multi-stage workflow that performs a preliminary validation of the anomaly itself, followed by a danger-assessment stage and a final risk-assessment stage. At the end of the process, a risk-rating score to prioritize the field investigations is provided.

ARTEMIS is a flexible and scalable tool, which can be adapted to different geographical realities and PS-Monitoring services. Its workflow is openly available for non-commercial use.
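A multi-stage triage of this kind (validation filter, danger assessment, risk scoring, ranking) can be sketched as below. All stage names, fields and weights are invented for illustration; the abstract does not disclose ARTEMIS' actual criteria:

```python
# Hypothetical three-stage anomaly triage: discard unvalidated anomalies,
# compute a danger score, then weight it by exposure to obtain a risk score
# used to rank anomalies for field investigation. Fields are illustrative.

def risk_score(anomaly):
    if not anomaly["validated"]:   # stage 1: filter likely false positives
        return 0.0
    danger = anomaly["velocity_mm_yr"] * anomaly["slope_factor"]  # stage 2
    return danger * anomaly["exposure"]  # stage 3: weight by elements at risk

anomalies = [
    {"id": "A1", "validated": True,  "velocity_mm_yr": 20.0, "slope_factor": 1.5, "exposure": 0.9},
    {"id": "A2", "validated": False, "velocity_mm_yr": 50.0, "slope_factor": 2.0, "exposure": 1.0},
    {"id": "A3", "validated": True,  "velocity_mm_yr": 10.0, "slope_factor": 1.0, "exposure": 0.2},
]
ranked = sorted(anomalies, key=risk_score, reverse=True)
```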

How to cite: Bertolo, D., Stra, M., and Thuegaz, P.: ARTEMIS – An operational tool to manage the information provided by Persistent Scatterers Monitoring at a regional scale., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12393, https://doi.org/10.5194/egusphere-egu22-12393, 2022.

EGU22-12951 | Presentations | NH6.3

ETNA 2021 13th December eruption: does SEVIRI data contribute to the early detection of lateral event? 

Massimo Musacchio, Malvina Silvestri, Giuseppe Puglisi, and Maria Fabrizia Buongiorno

Infrared remotely sensed data can be used to evaluate the surface thermal state of active volcanoes. Because the spectral radiance emitted by hot spots reaches its maximum in the Mid-InfraRed (MIR) region, the early detection of an impending eruption has been achieved by exploiting the SEVIRI 3.9 µm channel. Despite its spatial resolution (3×3 km² at nadir), the presence of a high-temperature source, even affecting only a small portion of one large pixel, causes a dramatic increase of the emitted MIR radiance that is easily detectable also at 4×5 km² (mid-latitudes).

The procedure named MS2RWS (MeteoSat to Rapid Response Web Service) has allowed us to identify Mt. Etna summit-area eruptions since February 2010, when it was developed to detect the beginning and estimate the duration of an eruption [1,2]. The procedure starts from the assumption that, in a remote sensing image, a pixel may assume a limited number of radiance values ranging from 0 up to saturation. The radiance of a given pixel, in clear-sky conditions with no eruption ongoing, follows a characteristic Gaussian trend related to the Sun elevation, and this trend varies during an eruption, affecting in particular the pixel centred over the Mt. Etna summit craters [3].

On 13th December 2021 an eruptive vent opened on the eastern flank of Mt. Etna, at an elevation of 2100 m a.s.l., about 3.5 km from the summit craters. This eruption lasted only one day and produced a small lava flow (less than 1 km long). Thus it might be considered a "punctual event" in the eruptive history of the volcano, and it is ideal for validating the capability of the MS2RWS procedure to detect flank eruptions from their beginning. This experiment succeeded, demonstrating that the MS2RWS procedure is also able to detect lateral eruptions such as this one, giving a further contribution to the monitoring of volcanic activity from space.
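The detection idea behind MS2RWS, namely that an eruption shows up as MIR radiance far above the diurnal clear-sky baseline of the summit pixel, can be sketched as follows. The Gaussian baseline parameters, the 3-sigma threshold and the radiance values are illustrative assumptions, not the operational settings:

```python
import numpy as np

# Illustrative anomaly test: under clear sky and no eruption the MIR radiance
# of a pixel follows a roughly Gaussian diurnal trend driven by Sun elevation;
# samples far above that baseline are flagged as possible eruptive activity.

def eruption_flag(radiance, hour, peak=12.0, width=4.0, base=2.0, amp=3.0, sigma=0.5):
    """Flag samples exceeding the diurnal baseline by more than 3 sigma."""
    baseline = base + amp * np.exp(-((hour - peak) ** 2) / (2 * width ** 2))
    return radiance > baseline + 3 * sigma

hours = np.array([0.0, 6.0, 12.0, 18.0])
radiance = np.array([2.1, 2.8, 9.0, 2.6])  # synthetic spike at noon: hot-spot signature
flags = eruption_flag(radiance, hours)
```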

How to cite: Musacchio, M., Silvestri, M., Puglisi, G., and Buongiorno, M. F.: ETNA 2021 13th December eruption: does SEVIRI data contribute to the early detection of lateral event?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12951, https://doi.org/10.5194/egusphere-egu22-12951, 2022.

There is still a poor understanding of how submarine volcanism works, although the majority of Earth’s volcanic activity happens in a submarine context, forming new crust and ejecting large amounts of material into the ocean.

This type of eruption has associated risks such as tsunamis and problems with shipping and air traffic, and is a source of natural pollution - gases such as sulphur and particulates are released into the atmosphere - hence the need for monitoring. Also, the study of submarine volcanic products will help understand in more detail how these volcanic processes evolve. Due to the remote location of submarine volcanoes, the use of remote sensing and earth observation techniques can be helpful in the monitoring process in order to mitigate the consequences of volcanic activity.

To address this problem, a database of recorded submarine volcanic eruptions between 2000 and 2018 was created, comprising 60 eruptions at 31 different volcanoes. A total of 450 satellite images were identified through observation of the discoloration plumes associated with submarine events, and 82 of these images, considered the most representative of each eruption, were subsequently selected for the extraction of spectral signatures.

The spectral signatures of the 263 sample points show similar characteristics within each type of discoloration plume (green coloration, brown coloration, and plumes associated with pumice rafts) and can therefore be grouped into several classes.

It can be concluded that the detection and differentiation of discoloration plumes associated with submarine volcanic events using remote sensing data can be accomplished effectively, confirming that remote sensing is an efficient and affordable technique for the regular detection, monitoring, and study of submarine volcanic eruptions in near-real time.

How to cite: Domingues, J. R., Mantas, V., and Pereira, A.: Characterisation of discolouration plumes resulting from submarine volcanism using remote sensing techniques between 2000 and 2018, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13029, https://doi.org/10.5194/egusphere-egu22-13029, 2022.

EGU22-13041 | Presentations | NH6.3

Application of the split range-spectrum method and GACOS model to correct the ionospheric and tropospheric delay of the InSAR time series 

Michał Tympalski, Marek Sompolski, Anna Kopeć, and Wojciech Milczarek

Synthetic aperture radar interferometry (InSAR) is an effective tool for large-area measurements and analysis, including topography measurements and ground surface subsidence caused by mining operations, earthquakes, or volcanic activity. However, the accuracy of these measurements is often limited by the disturbances that arise during microwave propagation through the ionosphere and troposphere. The atmospheric delay in the interferometric phase may make the detection of terrain surface changes impossible or significantly distorted. We propose a complete workflow for computing a time series from raw data acquired by the Sentinel-1 mission. The solution consists of a Small Baseline Subset (SBAS) algorithm with an implementation of the split range-spectrum method and the Generic Atmospheric Correction Online Service (GACOS) model. The proposed solution was used in time series calculations of SAR data in two areas: northern Chile and Taiwan. It is demonstrated that simultaneously applying both the tropospheric and ionospheric corrections significantly improves the final results.
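The tropospheric part of the correction can be illustrated with the conventional conversion of GACOS-style zenith delay maps to differential slant-range phase: the zenith delay difference between the two acquisitions is projected along the line of sight through the incidence angle and scaled by 4π/λ (two-way path). The wavelength is the Sentinel-1 C-band value; the delay numbers in the test are synthetic, and this sketch omits the split range-spectrum ionospheric step:

```python
import numpy as np

WAVELENGTH = 0.0555  # m, approximate Sentinel-1 C-band wavelength

def tropo_phase(ztd_master, ztd_slave, incidence_rad):
    """Differential tropospheric phase (radians) to subtract from the interferogram."""
    slant_delay = (ztd_master - ztd_slave) / np.cos(incidence_rad)
    return (4.0 * np.pi / WAVELENGTH) * slant_delay

def correct(ifg_phase, ztd_master, ztd_slave, incidence_rad):
    # Subtract the modeled atmospheric contribution from the observed phase.
    return ifg_phase - tropo_phase(ztd_master, ztd_slave, incidence_rad)
```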

How to cite: Tympalski, M., Sompolski, M., Kopeć, A., and Milczarek, W.: Application of the split range-spectrum method and GACOS model to correct the ionospheric and tropospheric delay of the InSAR time series, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13041, https://doi.org/10.5194/egusphere-egu22-13041, 2022.

EGU22-13232 | Presentations | NH6.3

Land degradation risk assessment using NDVI Landsat derived images – application in the hilly area of NE Romania 

Georgiana Văculișteanu, Mihai Ciprian Margarint, and Mihai Niculita

Land degradation is a complex concept to quantify, especially in today's global context of climate change. During the last decades, a reduction of land quality has been recorded globally, and the literature indicates that climate change and human activities are the most significant factors. To properly assess and mitigate this global problem, several remote sensing techniques have been developed, mainly to classify grassland quality, which has become a valuable indicator of the state of land degradation.

Nowadays, remote sensing indices are used to evaluate and predict scenarios of land degradation state and evolution. Hence, land cover changes, desertification and deforestation, drought monitoring, soil erosion, and salinization are successfully analyzed using the Normalized Difference Vegetation Index (NDVI). This index is the most widely used vegetation indicator for detecting vegetation dynamics and other problems related to this phenomenon.
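The screening of regions with high NDVI fluctuation can be sketched as follows: compute NDVI per pixel and date, then flag pixels whose temporal standard deviation exceeds a threshold. The stack and the 0.15 threshold are synthetic illustrations; in the study the time series would come from the Landsat archive in Google Earth Engine (2000–2021):

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); epsilon guards against zero denominators
    return (nir - red) / (nir + red + 1e-9)

def high_fluctuation(ndvi_ts, std_thr=0.15):
    """Pixels whose NDVI standard deviation over time exceeds std_thr."""
    return np.std(ndvi_ts, axis=0) > std_thr

# Toy (time x pixel) NDVI stack: pixel 0 is stable grassland, pixel 1 fluctuates.
ts = np.array([[0.70, 0.70],
               [0.68, 0.20],
               [0.72, 0.65],
               [0.69, 0.15]])
flags = high_fluctuation(ts)
```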

Our study aims to analyze grassland dynamics to assess the land degradation risk in the north-eastern lowlands of Romania. During the last century, the area was characterized by successive land reforms that translated into a heterogeneous diversity of grassland exploitation. Socio-economic development has brought, besides land management deficiencies, many other problems related to land ownership, land abandonment, mowing frequency, and grazing intensity. To fulfill our objective, we use the 30 m spatial resolution Landsat satellite archive within the Google Earth Engine platform to detect and monitor the regions with high fluctuation of the NDVI values. The investigated period spans 2000 to 2021.

Correlating the historical evolution of land use in NE Romania with the NDVI time series and the climatic data has revealed that both human-induced activities and climate change are impacting grassland dynamics. The mismanagement of the land use intensification process has led to degradation and irreversible changes inside the ecosystem.

How to cite: Văculișteanu, G., Margarint, M. C., and Niculita, M.: Land degradation risk assessment using NDVI Landsat derived images – application in the hilly area of NE Romania, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13232, https://doi.org/10.5194/egusphere-egu22-13232, 2022.

Wet deposition has been identified as a critical factor in the modelling of 137Cs in the Fukushima Daiichi Nuclear Power Plant (FDNPP) accident. However, it is difficult to simulate owing to the close interaction between various complex meteorological and physical processes during wet deposition. Limited measurements of in-cloud and below-cloud scavenging also contribute to the uncertainty in wet deposition modeling, leading to great variation among 137Cs wet deposition parameterizations. These variations can be amplified further by inaccurate meteorological input, making the simulation of radionuclide transport sensitive to the choice of wet scavenging parameterization. Moreover, simulations can also be influenced by differences between radionuclide transport models, even if they adopt similar parameterizations for wet scavenging. Although intensively investigated, wet deposition simulation is still subject to uncertainties in the meteorological inputs and wet scavenging modeling, leading to biased 137Cs transport prediction.

To improve the modeling of 137Cs transport, both in- and below-cloud wet scavenging schemes were integrated into the Weather Research and Forecasting-Chemistry (WRF-Chem) model, yielding online coupled modeling of meteorology and the two wet scavenging processes. Overall, 25 combinations of different in- and below-cloud scavenging schemes of 137Cs, covering most wet scavenging schemes reported in the literature, were integrated into WRF-Chem. Additionally, two microphysics schemes were compared to improve the simulation of precipitation. These 25 models and the ensemble mean of 9 representative models were systematically compared with a previous below-cloud-only WRF-Chem model, using measurements of the cumulative deposition and atmospheric concentrations of 137Cs. The findings elucidate the range of variation among these schemes both within and across the five in-cloud groups, and reveal the behaviors and sensitivities of the different schemes in different scenarios.

The results revealed that the Morrison double-moment cloud microphysics scheme improves the simulation of the rainfall and deposition patterns. Furthermore, the integration of the in-cloud schemes in WRF-Chem substantially reduces the bias in the cumulative deposition simulation, especially in the Nakadori and Tochigi regions where light rain dominated. For the atmospheric concentration of 137Cs, the models with in-cloud schemes that consider cloud parameters showed better and more stable performance, among which Hertel-Bakla performed best for atmospheric concentration and Roselle-Apsimon performed best for both deposition and atmospheric concentration. In contrast, the in-cloud schemes that rely solely on rain intensity were found to be sensitive to the meteorological conditions and showed varied performance across the plume events examined. The analysis of the spatial pattern shows that the Roselle scheme, which considers cloud liquid water content and depth, achieves a more balanced allocation of 137Cs between the air and the ground in these two cases than the empirical power-function scheme Environ. The ensemble mean achieves satisfactory performance except for one plume event, and still outperforms most individual models. The range of variation of the 25 models covered most of the measurements, reflecting the reasonable capability of WRF-Chem for modeling 137Cs transport.

How to cite: Zhuang, S., Dong, X., and Fang, S.: Sensitivity analysis on the wet deposition parameterization for 137Cs transport modeling following the Fukushima Daiichi Nuclear Power Plant accident, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-177, https://doi.org/10.5194/egusphere-egu22-177, 2022.

Nuclear emergency response to an accidental release around a nuclear power plant (NPP) site requires a fast and accurate estimate of the influence of spreading hazardous gaseous pollutants, which is critical for protecting lives, other living creatures, and the environment. However, NPP sites typically consist of dense buildings and multiple terrain types, e.g. rivers and mountains, which poses challenges to the atmospheric dispersion calculations used in response tasks. Micro-SWIFT SPRAY (MSS) comprises both a diagnostic wind model and a dispersion model, enabling simulation of airflows and atmospheric dispersion from meteorological and other inputs. For small-scale scenarios in particular, its separate module for modeling the influence of obstacles offers the potential for precise atmospheric dispersion calculations. However, its error behavior in such a scenario, around an NPP site with complex topography, remains to be demonstrated. In this study, MSS is comprehensively evaluated against a 1:600-scale wind tunnel experiment for small-scale (3 km × 3 km) atmospheric dispersion modeling. Tens of buildings are located in this scenario of an NPP site surrounded by a mountain and a river. The evaluation of the diagnostic wind modeling covers the speed, direction, and distribution of horizontal airflows and the vertical profile of speed at a representative site; the concentration calculation is evaluated in terms of the horizontal distribution, axis profile, and vertical profile at a representative site. The results demonstrate that MSS can reproduce fine airflows near the buildings but overestimates the wind speed. The maximum deviation of vertical speed is around 2.09 m/s at the representative site. The simulated concentration plume reproduces the location of the highest concentration and matches the observations well. The axis profile of concentration is underestimated, and the vertical profile displays an increasing deviation with height.
Compared with the observations, the FAC5 and FAC2 of the concentration simulation reach 0.945 and 0.891 over the entire calculation domain, which confirms the performance of MSS in small-scale modeling.
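The FAC2 and FAC5 statistics quoted above are the fractions of predictions that fall within a factor of 2 (or 5) of the paired observations. A minimal sketch of the metric, using made-up concentration values rather than data from this study:

```python
import numpy as np

# FACn = fraction of predictions within a factor n of the observations.
def fac_score(pred, obs, factor):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mask = obs > 0                      # ratio defined only for positive obs
    ratio = pred[mask] / obs[mask]
    return float(np.mean((ratio >= 1.0 / factor) & (ratio <= factor)))

# Made-up concentration pairs, purely for illustration.
obs = np.array([1.0, 2.0, 4.0, 8.0])
pred = np.array([1.5, 1.0, 9.0, 50.0])
print(fac_score(pred, obs, 2))  # FAC2 -> 0.5
print(fac_score(pred, obs, 5))  # FAC5 -> 0.75
```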

How to cite: Dong, X., Zhuang, S., and Fang, S.: Micro-SWIFT SPRAY modeling of atmospheric dispersion around a nuclear power plant site with complex topography, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-190, https://doi.org/10.5194/egusphere-egu22-190, 2022.

EGU22-666 | Presentations | GI2.3

Dry deposition velocity of chlorine 36 on grassland 

Sourabie Deo, Didier Hebert, Lucilla Benedetti, Elsa Vitorge, Beatriz Lourino Cabana, Valery Guillou, and Denis Maro

Chlorine 36 (36Cl, T1/2 = 301,000 years) is a radionuclide of natural and anthropogenic origin that can be released accidentally during decommissioning of nuclear power plants or chronically during recycling of nuclear waste. Once emitted into the atmosphere, 36Cl (gas and particles) can be transferred to the soil and vegetation cover by dry and wet deposition. However, knowledge of these deposits is very scarce. Because of its relatively high mobility in the geosphere and its high bioavailability, the fate of 36Cl in the environment should be studied for environmental and human impact assessments. The objective of this work is therefore to determine the dry deposition rates of chlorine 36 on grassland. Grass is studied because it is a link in the human food chain via cow's milk.

To achieve this objective, a method for extracting the chlorine contained in plant leaves has been developed. The method consists in heating the dried and ground plant sample in the presence of sodium hydroxide. A temperature gradient up to 450°C allows the extraction to be carried out in two stages: (i) the chlorides, which have a strong affinity for alkaline environments, are first extracted from the plant and preserved in the sodium hydroxide; (ii) the organic matter is then destroyed by combustion and the sodium hydroxide crystallised. After removal from the oven, the dry residue is dissolved in ultrapure water and chemically prepared for the measurement of chlorine 36. This extraction method was validated by applying it to NIST standards of peach and apple leaves. The average extraction efficiency of chlorides was 83 ± 3%.

For the determination of dry deposition rates, 1 m² of grass was exposed every 2 weeks at the IRSN La Hague technical platform (PTILH), located 2 km downwind from Orano la Hague, a chronic source of low-level chlorine 36 emissions. A mobile shelter with automatic humidity detection covered the grass during rainy episodes. In proximity to the grass, atmospheric chlorine was also sampled at the same frequency as the grass. Gaseous chlorine was sampled by bubbling in sodium hydroxide and by an AS3000 sampler containing an activated carbon cartridge. Particulate chlorine was collected on a composite (teflon and glass fibre) filter. Chlorine 36 was measured by accelerator mass spectrometry on ASTER (Accelerator for Earth Sciences, Environment and Risks) at CEREGE, Aix-en-Provence, France. All samples were subjected to a succession of chemical preparations in order to remove the sulphur 36 (an isobaric interferent) and to collect the chlorides in the form of AgCl pastilles. The results show a chlorine 36 deposition flux on the grass of 2.94 × 10² at/m²·s, with a deposition velocity in dry weather vd(gas+particles) = 8 × 10⁻⁴ m/s, for a contribution of 65.5% from particulate chlorine 36 and 34.5% from gaseous chlorine 36. Based on these experimental results, modelling of the dry and wet deposition will be carried out, taking into account parameters related to the canopy and atmospheric turbulence.
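The deposition velocity reported above is simply the measured deposition flux divided by the near-surface atmospheric concentration, vd = F/C. A minimal sketch, in which the air concentration is back-calculated for illustration rather than taken from a reported measurement:

```python
# Dry deposition velocity as v_d = F / C: deposition flux divided by the
# near-surface air concentration. The air concentration below is
# back-calculated for illustration, not a measurement from this study.
def deposition_velocity(flux, air_concentration):
    return flux / air_concentration

F = 2.94e2        # deposition flux, atoms m^-2 s^-1 (from the abstract)
C = F / 8.0e-4    # hypothetical air concentration, atoms m^-3
print(deposition_velocity(F, C))  # ~8e-4 m/s, the reported v_d
```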

How to cite: Deo, S., Hebert, D., Benedetti, L., Vitorge, E., Lourino Cabana, B., Guillou, V., and Maro, D.: Dry deposition velocity of chlorine 36 on grassland, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-666, https://doi.org/10.5194/egusphere-egu22-666, 2022.

EGU22-1235 | Presentations | GI2.3

Modeling the depth dependence of Cs-137 concentration in Lake Onuma 

Yuko Hatano, Kentaro Akasaki, Eiichi Suetomi, Yukiko Okada, Kyuma Suzuki, and Shun Watanabe

Lake Onuma on Mt. Akagi (Gunma Prefecture, Japan) is a closed lake with an average water residence time of 2.3 years. The activity concentration of radioactive cesium in the lake was high shortly after the Fukushima accident. According to Suzuki et al. [1] and Watanabe [2], after a filtration process, Cs-137 is separated into two groups: a particulate form and a dissolved form. These two forms appear to have very different concentration profiles when the Cs-137 concentration is plotted against the sampled water depth. In the present study, we model the behavior of the particulate/dissolved forms with an emphasis on the depth dependency.

We consider a creation–annihilation process of plankton as the model of the particulate form, since diatom shells are found to be a major constituent of the particulate Cs-137 [2]. We set ∂P/∂t = f(x,t) and f(x,t) = χ(x) cos(ωt) (0 ≤ x ≤ L (water column height), t > 0), where P = P(x,t) is the activity concentration of the particulate form. The term f(x,t) is the net production rate of the plankton at a specific location x and time t. The seasonal cycle is taken into account by the cosine function (we neglect the phase shift here). The function χ(x), which depends solely on the water depth x, accounts for the dynamics and inhomogeneity of the lake water, such as circulation, stratification, or a thermocline. We assume that such water structure relates to the production rate of plankton through the function χ(x). Thus, we may obtain the concentration of particulate Cs-137. For the dissolved concentration S(x,t), we use the classical diffusion equation with the diffusivity K dependent on both space and time (i.e. K(x,t)), namely ∂S/∂t = ∇·(K(x,t) ∇S), where S = S(x,t) is the activity concentration of the dissolved form. The total activity concentration C(x,t) is the sum of P(x,t) and S(x,t). Using this pair of equations, we can reproduce the following: (1) depth profiles of each of the dissolved and particulate activity concentrations, and (2) depth profiles of the total Cs-137 concentration.
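The pair of equations above can be integrated numerically. The sketch below uses explicit finite differences with a constant diffusivity and hypothetical χ(x), ω, and initial profiles; none of these values come from the study itself:

```python
import numpy as np

# Minimal sketch of the two model equations: dP/dt = chi(x) cos(wt) and
# dS/dt = d/dx(K dS/dx), here with a constant K and hypothetical chi(x),
# omega, and initial profiles (none of these values come from the study).
L, nx, dt, nt = 10.0, 50, 0.001, 1000
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
omega = 2.0 * np.pi            # one seasonal cycle per unit time (assumed)
chi = np.exp(-x)               # plankton production peaks near surface (assumed)
K = 0.1                        # constant diffusivity for simplicity

P = np.zeros(nx)               # particulate Cs-137 activity concentration
S = np.exp(-((x - 5.0) ** 2))  # dissolved Cs-137 initial profile (assumed)
for n in range(nt):
    t = n * dt
    P += dt * chi * np.cos(omega * t)                    # forward Euler for P
    lap = np.zeros(nx)
    lap[1:-1] = (S[2:] - 2.0 * S[1:-1] + S[:-2]) / dx**2
    S += dt * K * lap                                    # explicit diffusion step
C = P + S                      # total activity concentration C = P + S
print(C.shape)
```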

 [1] Suzuki, K. et al., Sci. Tot. Env. (2018)

 [2] Watanabe, S. et al.,  Proc. 20th Workshop on Environmental Radioactivity (2019)

How to cite: Hatano, Y., Akasaki, K., Suetomi, E., Okada, Y., Suzuki, K., and Watanabe, S.: Modeling the depth dependence of Cs-137 concentration in Lake Onuma, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1235, https://doi.org/10.5194/egusphere-egu22-1235, 2022.

EGU22-3340 | Presentations | GI2.3

Factors controlling the dissolved 137Cs seasonal fluctuations in the Abukuma River under the influence of the Fukushima Nuclear Power Plant accident 

Yasunori Igarashi, Kenji Nanba, Toshihiro Wada, Yoshifumi Wakiyama, Yuichi Onda, and Shota Moritaka

The 2011 Fukushima Daiichi Nuclear Power Plant (FDNPP) accident released large amounts of radioactive materials into the environment. River systems play an important role in the terrestrial redistribution of FDNPP-derived 137Cs in association with water and sediment movement. We examined the seasonal fluctuations in dissolved and particulate 137Cs activity concentrations and clarified the biological and physicochemical factors controlling 137Cs in the Abukuma River’s middle course in the region affected by the FDNPP accident. The results showed that water temperature and K+ concentration dominated the seasonality of the dissolved 137Cs activity concentration. We concluded that the 137Cs in organic matter is not a source of dissolved 137Cs in river water. The study also revealed the temperature dependence of Kd in riverine environments using a Van ’t Hoff equation. The standard reaction enthalpy of 137Cs in the Abukuma River was calculated to be approximately −19.3 kJ/mol. This was the first study to clearly reveal the mechanisms by which the dissolved 137Cs activity concentration and Kd are influenced by chemical and thermodynamic processes in the middle course of a large river, and it is expected to lead to an improved model of 137Cs dynamics in rivers.
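The temperature dependence of Kd follows from the Van 't Hoff relation. Using the standard reaction enthalpy of −19.3 kJ/mol reported above, the ratio of Kd at two water temperatures can be sketched as follows (the temperature values are illustrative):

```python
import numpy as np

R = 8.314          # gas constant, J mol^-1 K^-1
dH = -19.3e3       # standard reaction enthalpy from the abstract, J mol^-1

def kd_ratio(T1, T2):
    """Van 't Hoff: ln(Kd(T2)/Kd(T1)) = -(dH/R) * (1/T2 - 1/T1)."""
    return np.exp(-dH / R * (1.0 / T2 - 1.0 / T1))

# With a negative reaction enthalpy, Kd decreases as the water warms,
# i.e. proportionally more 137Cs stays dissolved in summer.
print(kd_ratio(278.15, 298.15))  # Kd at 25 C relative to Kd at 5 C (< 1)
```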

How to cite: Igarashi, Y., Kenji, N., Wada, T., Wakiyama, Y., Onda, Y., and Moritaka, S.: Factors controlling the dissolved 137Cs seasonal fluctuations in the Abukuma River under the influence of the Fukushima Nuclear Power Plant accident, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3340, https://doi.org/10.5194/egusphere-egu22-3340, 2022.

EGU22-3442 | Presentations | GI2.3

A comparative study of riverine 137Cs dynamics during high-flow events at three contaminated river catchments in Fukushima 

Yoshifumi Wakiyama, Takuya Niida, Hyoe Takata, Keisuke Taniguchi, Honoka Kurosawa, Kazuki Fujita, and Alexei Konoplev

This study presents the temporal variations in riverine 137Cs concentrations and fluxes to the ocean during high-flow events in three coastal river catchments contaminated by the Fukushima Daiichi Nuclear Power Plant accident. River water samples were collected at points downstream in the Niida, Ukedo, and Takase Rivers during three high-flow events that occurred in 2019–2020. Variations in both the dissolved 137Cs concentration and 137Cs concentration in suspended solids appeared to reflect the spatial pattern of the 137Cs inventory in the catchments, rather than variations in physico-chemical properties. Negative relationships between the 137Cs concentration and δ15N in suspended sediment were found in all rivers during the intense rainfall events, suggesting an increased contribution of sediment from forested areas to the elevated 137Cs concentration. The 137Cs flux ranged from 0.33 to 18 GBq, depending on the rainfall erosivity. The particulate 137Cs fluxes from the Ukedo River were relatively low compared with the other two rivers and were attributed to the effect of the Ogaki Dam reservoir upstream. The ratio of 137Cs desorbed in seawater to 137Cs in suspended solids ranged from 2.8% to 6.6% and tended to be higher with a higher fraction of exchangeable 137Cs. The estimated potential release of 137Cs from suspended solids to the ocean was 0.048–0.57 GBq, or 0.8–6.2 times higher than the direct flux of dissolved 137Cs from the river. Episodic sampling during high-flow events demonstrated that the particulate 137Cs flux depends on catchment characteristics and controls 137Cs transfer to the ocean. 

How to cite: Wakiyama, Y., Niida, T., Takata, H., Taniguchi, K., Kurosawa, H., Fujita, K., and Konoplev, A.: A comparative study of riverine 137Cs dynamics during high-flow events at three contaminated river catchments in Fukushima, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3442, https://doi.org/10.5194/egusphere-egu22-3442, 2022.

EGU22-5397 | Presentations | GI2.3

Integrating measurement representativeness and release temporal variability to improve the Fukushima-Daiichi 137Cs source reconstruction 

Joffrey Dumont Le Brazidec, Marc Bocquet, Olivier Saunier, and Yelva Roustan

The Fukushima-Daiichi accident involved massive and complex releases of radionuclides into the atmosphere. Assessment of the releases is a key issue and can be achieved by advanced inverse modelling techniques combined with a relevant dataset of measurements. A Bayesian inversion is particularly suitable for this case: it allows for rigorous statistical modelling and enables easy incorporation of information of different natures into the reconstruction of the source and the associated uncertainties.

We propose several methods to better quantify the Fukushima-Daiichi 137Cs source and the associated uncertainties. Firstly, we implement the Reversible-Jump MCMC algorithm, a sampling technique able to reconstruct the distributions of the 137Cs source magnitude together with its temporal discretisation. Secondly, we develop methods to (i) mix both air concentration and deposition measurements, and (ii) take into account the spatial and temporal information from the air concentration measurements in determining the error covariance matrix.

Using these methods, we obtain distributions of hourly 137Cs release rates from 11 to 24 March and assess the performance of our techniques by carrying out a model-to-data comparison. Furthermore, we demonstrate that this comparison is very sensitive to the statistical modelling of the inverse problem.

How to cite: Dumont Le Brazidec, J., Bocquet, M., Saunier, O., and Roustan, Y.: Integrating measurement representativeness and release temporal variability to improve the Fukushima-Daiichi 137Cs source reconstruction, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5397, https://doi.org/10.5194/egusphere-egu22-5397, 2022.

EGU22-6698 | Presentations | GI2.3

Vertical distribution of 137Cs in bottom sediments as representing the time changes of water contamination: Chernobyl and Fukushima 

Aleksei Konoplev, Yoshifumi Wakiyama, Toshihiro Wada, Yasunori Igarashi, Gennady Laptev, Valentin Golosov, Maxim Ivanov, Mikhail Komissarov, and Kenji Nanba

Bottom sediments of lakes and dam reservoirs can provide insight into the dynamics of 137Cs strongly bound to sediment particles. On this premise, a number of bottom sediment cores were collected in deep parts of Lake Glubokoe, Lake Azbuchin, and the Cooling Pond in the close vicinity of the Chernobyl NPP in Ukraine, in the Schekino reservoir (Upa River) in the Tula region of Russia (2018), and in the Ogaki reservoir (Ukedo River) in the Fukushima contaminated area (2019). Each layer of bottom sediments can be attributed to a certain time of suspended particle sedimentation. With the 137Cs activity concentration in a given layer of bottom sediments corresponding to the 137Cs concentration on suspended matter at that point in time, we were able to reconstruct the post-accident dynamics of particulate 137Cs activity concentrations. Using experimental values of the distribution coefficient Kd, changes in the dissolved 137Cs activity concentrations were estimated. The annual mean particulate and dissolved 137Cs wash-off ratios were also calculated for the period after the accidents. Interestingly, the particulate 137Cs wash-off ratios for the Ukedo River at the Ogaki dam were found to be similar to those for the Pripyat River at Chernobyl in the same time period after the accident, while the dissolved 137Cs wash-off ratios in the Ukedo River were an order of magnitude lower than the corresponding values in the Pripyat River. The estimates of particulate and dissolved 137Cs concentrations in the Chernobyl cases were in reasonable agreement with monitoring data and with predictions of the semi-empirical diffusional model. However, both the particulate and dissolved 137Cs activity concentrations and wash-off ratios in the Ukedo River declined faster during the first eight years after the FDNPP accident than predicted by the diffusional model, most likely due to greater natural attenuation and, to some extent, the remediation measures implemented on the catchments in Fukushima.

This research was supported by Science and Technology Research Partnership for Sustainable Development (SATREPS), Japan Science and Technology Agency (JST)/Japan International Cooperation Agency (JICA) (JPMJSA1603), by bilateral project No. 18-55-50002 of Russian Foundation for Basic Research (RFBR) and Japan Society for the Promotion of Science (JSPS), and JSPS Project KAKENHI (B) 18H03389.

How to cite: Konoplev, A., Wakiyama, Y., Wada, T., Igarashi, Y., Laptev, G., Golosov, V., Ivanov, M., Komissarov, M., and Nanba, K.: Vertical distribution of 137Cs in bottom sediments as representing the time changes of water contamination: Chernobyl and Fukushima, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6698, https://doi.org/10.5194/egusphere-egu22-6698, 2022.

EGU22-7068 | Presentations | GI2.3

Seasonal variation of dissolved Cs-137 concentrations in headwater catchments in Yamakiya district, Fukushima Prefecture 

Taichi Kawano, Yuichi Onda, Junko Takahishi, Fumiaki Makino, and Sho Iwagami

The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident occurred on March 11, 2011, releasing a large amount of Cs-137 into the environment. It is important to clarify the behavior of Cs-137 in headwater catchments because most of the Cs-137 was deposited in forest areas and is transported through the environment via river systems.

The purpose of this study was to clarify the influence of water quality composition and organic matter on the seasonal variation of dissolved Cs-137 concentrations in stream water, based on long-term monitoring since 2011 at four headwater catchments in the Yamakiya district, Fukushima Prefecture (Iboishiyama, Ishidairayama, Koutaishiyama, Setohachiyama), located about 35 km northwest of the FDNPP.

Water temperature, pH, and EC were measured in the field, and SS (suspended sediments) and coarse organic matter were collected using a time-integrated SS sampler and an organic matter net. The Cs-137 concentrations were measured in the laboratory using a germanium detector. Concentrations of cations (Na⁺, K⁺, Ca²⁺, Mg²⁺, NH₄⁺) and anions (Cl⁻, SO₄²⁻, NO₃⁻, NO₂⁻, PO₄²⁻) were measured by ion chromatography after 0.45 μm filtration. In addition, dissolved organic carbon (DOC) concentrations were measured using a total organic carbon analyzer.

The results showed that K⁺, which is highly competitive with Cs-137, was detected at Iboishiyama, Ishidairayama, and Koutaishiyama, while NH₄⁺ was only detected in some samples at Iboishiyama. There was no obvious relationship between dissolved ion concentration and water temperature, or between dissolved ion concentration and dissolved Cs-137 concentration, at any site. However, positive correlations of dissolved cesium concentration with water temperature, and of DOC with water temperature, were observed at all sites regardless of the presence of K⁺ and NH₄⁺. On the other hand, there was no clear relationship between the cesium concentrations in SS and organic matter and water temperature. These results suggest that the seasonal variation of dissolved Cs-137 concentrations in stream water with water temperature could be caused by the seasonality of microbial decomposition of organic matter.

How to cite: Kawano, T., Onda, Y., Takahishi, J., Makino, F., and Iwagami, S.: Seasonal variation of dissolved Cs-137 concentrations in headwater catchments in Yamakiya district, Fukushima Prefecture, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7068, https://doi.org/10.5194/egusphere-egu22-7068, 2022.

EGU22-8178 | Presentations | GI2.3

Regularities of the 137Cs secondary distribution in the soil-moss cover of elementary landscape-geochemical systems and its dynamics within 6 years on the test site in the Chernobyl abandoned zone, Russia 

D. Dolgushin and E. Korobova

A study of the 137Cs distribution in a landscape cross-section characterizing the ELGS system (top–slope–closing depression) at the “Vyshkov-2” test site, located in the Chernobyl abandoned zone (Bryansk region, Russia), was performed in 2015 and 2021. The test site (70×100 m) is located on the Iput’ river terrace in a pine forest with undisturbed soil-plant cover. The soil cover consists of sod-podzolic sandy illuvial-ferruginous soils. The initial level of 137Cs contamination of the area varied from 1480 kBq/m2 to 1850 kBq/m2. Up to now, 89-99 % of the total 137Cs is fixed in the upper 20 cm soil layer, with 70-96 % in the upper 8 cm. This allows field spectrometry data to be used to study the structure of the 137Cs contamination field. The 137Cs activity was measured in the soil and moss cover along cross-sections at 1 m steps with an adapted Violinist-III gamma-spectrometer (USA). The Cs-137 content in the soil cores and plant samples was determined in the laboratory with a Canberra gamma-spectrometer with an HPGe detector. It was shown that there is no unidirectional movement of 137Cs, either in the soil or in the vegetation cover of the ELGS, from the top to the closing depression. On the contrary, the data obtained allow us to state a pronounced cyclical variation of the 137Cs activity in the ELGS, which can be traced in both the soil and the vegetation. The variation appeared to be rather stable in space 29 and 35 years after the primary pollution. The cyclic fluctuation of the 137Cs activity was described mathematically using Fourier analysis, with the three main revealed harmonics used to model the observed changes. High and significant correlation coefficients between the variation of 137Cs activity and the model for the soil-vegetation cover (r0.01 = 0.868, n = 17 in 2015; r0.01 = 0.675, n = 17 in 2021), soils (r0.01 = 0.503-0.859, n = 17) and moss samples (r0.01 = 0.883, n = 17 in 2015; r0.01 = 0.678, n = 17 in 2021) proved the satisfactory fit of the models.
The character of the 137Cs variability in the moss cover was generally similar to that of the surface soil contamination, but the level of contamination and the amplitude were specific.

The performed study confirmed specific features of 137Cs secondary migration in the ELGS, which are described by periodic functions. We infer that the observed cyclicity reflects the migration of elements in the ELGS system with water.
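The harmonic modelling described above can be sketched as a linear least-squares fit of a constant plus three Fourier harmonics. The series, fundamental period, and amplitudes below are synthetic assumptions, not the measured Vyshkov-2 data:

```python
import numpy as np

# Sketch: fit three Fourier harmonics to a 137Cs activity series by linear
# least squares. The series, fundamental period, and amplitudes below are
# synthetic assumptions, not the measured Vyshkov-2 data.
n = 17                                   # points along the cross-section
xs = np.arange(n, dtype=float)
period = float(n)                        # fundamental period (assumed)
rng = np.random.default_rng(0)
activity = 1600 + 200 * np.sin(2 * np.pi * xs / period) + 50 * rng.normal(size=n)

cols = [np.ones(n)]
for k in (1, 2, 3):                      # the three main harmonics
    cols.append(np.cos(2 * np.pi * k * xs / period))
    cols.append(np.sin(2 * np.pi * k * xs / period))
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, activity, rcond=None)
model = A @ coef
r = np.corrcoef(model, activity)[0, 1]   # correlation between model and data
print(round(float(r), 3))
```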

The reported study was funded from the Vernadsky Institute federal budget (research task #0137-2019-0006). The field work was supported in part by RFBR project No. 19-05-00816.

How to cite: Dolgushin, D. and Korobova, E.: Regularities of the 137Cs secondary distribution in the soil-moss cover of elementary landscape-geochemical systems and its dynamics within 6 years on the test site in the Chernobyl abandoned zone, Russia, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8178, https://doi.org/10.5194/egusphere-egu22-8178, 2022.

EGU22-9022 | Presentations | GI2.3

Ten-year long-range transport of radiocaesium in the surface layer in the Pacific Ocean and its marginal seas 

Michio Aoyama, Yuichiro Kumamoto, and Yayoi Inomata

Radiocaesium derived from the Fukushima Dai-ichi Nuclear Power Plant (FNPP1) accident was observed across a wide area of the North Pacific, not only in surface seawater but also in the ocean interior. In this presentation, we summarize the time scale of the Lagrangian transport of FNPP1-derived radiocaesium in surface water, from the time of the accident to March 2021, in the North Pacific and Arctic Oceans and their marginal seas, as shown below.

Initial observation results until December 2012 in the surface layer of the North Pacific Ocean, from the global observation programmes, revealed that a typical feature within one year after the accident was a westward movement across the North Pacific Ocean, at a reported speed of 7 km day-1 until August 2011. After that, the main body of FNPP1-derived radiocaesium moved east at 3 km day-1 and separated from Japan in 2013. The arrival of the FNPP1 signal at the west coast of the American continent was reported in 2014. Elevated FNPP1-derived radiocaesium concentrations were reported in the Bering Sea in 2017 and in the Arctic Ocean in 2019. The northward bifurcation of the Kuroshio Extension made this transport of FNPP1-derived radiocaesium to the subarctic and arctic regions evident, while transport by the southward bifurcation was not observed. At the Hawaiian Islands in the subtropical gyre, there was no signal of FNPP1-derived radiocaesium between March 2011 and February 2017. At Yonaguni Island, where the Kuroshio enters the East China Sea, the FNPP1 signal arrived eight years after the accident and might have been transported mainly from the subtropical gyre.

In the marginal seas of the North Pacific Ocean, elevated FNPP1-derived radiocaesium concentrations were observed in the northern East China Sea in 2014 and in the Sea of Japan in 2014/2015.

We also briefly summarize study results on nuclides other than radiocaesium (e.g., 90Sr, 239,240Pu, and 129I).

How to cite: Aoyama, M., Kumamoto, Y., and Inomata, Y.: Ten-year long-range transport of radiocaesium in the surface layer in the Pacific Ocean and its marginal seas, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9022, https://doi.org/10.5194/egusphere-egu22-9022, 2022.

EGU22-9055 | Presentations | GI2.3

Determining sources of the 137Cs concentration in seawater at Fukushima Daiichi Nuclear Power Plant using Antecedent Precipitation Index 

H. Sato and Y. Onda

Radiocesium (137Cs) was one of the radioactive materials released from the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident in March 2011. The discharge of highly 137Cs-contaminated water from groundwater to the sea was reduced after installation of the sea-side impermeable wall as a countermeasure against contaminated water in October 2015. As a result, 137Cs contamination in water from other sources became more prominent, and the levels of 137Cs concentration in seawater were correlated with rainfall fluctuations. To determine the source of contamination, we estimated the fluctuation patterns of the 137Cs concentration in seawater, the groundwater level, and the discharge from the channels using the Antecedent Precipitation Index (Rw) method.
The results indicated that the fluctuation in seawater collected near Units 1-4 agreed strongly with Rw computed with a 3-day half-life. This half-life is shorter than that estimated from the groundwater level (7 to 30 days). Therefore, the 137Cs concentration in seawater was influenced by relatively faster runoff rather than by the deep groundwater flow. We also made a spatial distribution map of the 137Cs concentration in seawater to determine the sources of contamination. It showed that the 137Cs concentration was highest at “south-inside the intake of Units 1-4”, where the outlets of the K and BC discharge channels are located. In particular, the concentration of 137Cs in channel K was found to correlate with the concentration of 137Cs in seawater near Units 1-4 (average R2 = 0.5). These results indicate that the concentration of 137Cs in seawater inside the FDNPP port can be estimated by the Rw method and that the source of the contamination can be determined using the half-life.
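The Antecedent Precipitation Index is commonly computed as a running sum of rainfall decayed by a factor set by the chosen half-life. One common recursive form, with hypothetical rainfall values, is sketched below:

```python
import numpy as np

# One common recursive form of the Antecedent Precipitation Index:
# Rw_t = k * Rw_{t-1} + rain_t, where the daily decay factor k is set by
# the chosen half-life. The rainfall series below is hypothetical.
def antecedent_precipitation_index(rain, half_life_days):
    k = 0.5 ** (1.0 / half_life_days)      # daily decay factor
    rw = np.zeros(len(rain))
    for t, r in enumerate(rain):
        rw[t] = (k * rw[t - 1] if t > 0 else 0.0) + r
    return rw

rain = np.array([0.0, 20.0, 0.0, 0.0, 5.0, 0.0, 0.0, 0.0])  # mm/day
# A 3-day half-life forgets rain much faster than a 30-day half-life.
print(antecedent_precipitation_index(rain, 3.0))
print(antecedent_precipitation_index(rain, 30.0))
```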

How to cite: Sato, H. and Onda, Y.: Determining sources of the 137Cs concentration in seawater at Fukushima Daiichi Nuclear Power Plant using Antecedent Precipitation Index, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9055, https://doi.org/10.5194/egusphere-egu22-9055, 2022.

EGU22-10644 | Presentations | GI2.3

Temporal trends of radio-cesium concentration in the marine environment after the Chernobyl and Fukushima Dai-ichi Nuclear Power Plant accidents 

H. Takata

European seas such as the Baltic, North, and Norwegian Seas are the areas most affected by the accident at the Chernobyl nuclear power plant (CNPP) in 1986. Since the Fukushima Daiichi nuclear power plant (FDNPP) is located on the coast of the North Pacific Ocean in eastern Japan, its accident resulted in the release of large amounts of radiocaesium to the surrounding coastal marine environment (i.e. the waters off Fukushima and neighboring prefectures). The temporal change of the radiocaesium concentration in seawater after both accidents depended largely on the submarine topography: the Baltic Sea is a semi-closed basin, while the Norwegian and North Seas and the waters off Fukushima and neighboring prefectures are directly connected to open water. Although the concentration of radiocaesium (137Cs) in the surface water of the central Baltic Sea continuously decreased, the values in 1996, ten years after the accident, were still higher than the pre-accident level of 1985. On the other hand, in the waters off Fukushima and neighboring prefectures, 137Cs concentrations in 2020, nine years after the accident, are approaching the pre-accident levels of 2010. This quick decrease is attributable to the intrusion or mixing of water masses with low 137Cs.

How to cite: Takata, H.: Temporal trends of radio-cesium concentration in the marine environment after the Chernobyl and Fukushima Dai-ichi Nuclear Power Plant accidents, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10644, https://doi.org/10.5194/egusphere-egu22-10644, 2022.

EGU22-10713 | Presentations | GI2.3 | Highlight

Decontamination and subsequent natural restoration processes impact on terrestrial systems in Niida River Catchment in Fukushima 

Yuichi Onda, Feng Bin, Yoshifumi Wakiyama, Keisuke Taniguchi, Asahi Hashimoto, and Yupan Zhang

For the Fukushima region in Japan, the large-scale decontamination in the catchments requires more attention because of its possible consequence of altering the particulate Cs-137 flux from the terrestrial environment to the ocean. Here, combining a high-resolution satellite dataset and concurrent river monitoring results, we quantitatively assess the impacts of land cover changes in large-area decontaminated regions on river suspended sediment (SS) and particulate Cs-137 dynamics during 2013-2018. We find that the erodibility of the decontaminated regions dramatically increased during the decontamination stage but rapidly declined in the subsequent natural-restoration stage. River SS dynamics show a linear response to these land cover changes: the annual SS load (normalized by water discharge) at the end of decontamination increased by over 300% relative to pre-decontamination levels and decreased by about 48% at the beginning of natural restoration. Fluctuations in particulate Cs-137 concentrations reflect well the process of sediment source alteration due to land cover changes in decontaminated regions. This “Fukushima decontamination experiment” reveals the dramatic impact of decontamination and natural-restoration processes, which highlights the need to quantitatively assess human impacts on land use and the resultant alteration of sediment transfer patterns in large-scale catchments.

How to cite: Onda, Y., Bin, F., Wakiyama, Y., Taniguchi, K., Hashimoto, A., and Zhang, Y.: Decontamination and subsequent natural restoration processes impact on terrestrial systems in Niida River Catchment in Fukushima, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10713, https://doi.org/10.5194/egusphere-egu22-10713, 2022.

EGU22-10817 | Presentations | GI2.3

Effects of stemflow on radiocesium infiltration into the forest soil 

Hiroaki Kato, Hikaru Iida, Tomoki Shinozuka, Yuma Niwano, and Yuichi Onda

Radiocesium deposited in the forest canopy is transferred to the forest floor by rainwater and litterfall. Among these pathways, stemflow likely increases the radiocesium inventory by concentrating rainwater around the trunk. However, the effects of stemflow on the influx of radiocesium into forest soil have not been evaluated quantitatively. In this study, the fluxes of rainwater via stemflow, throughfall, and soil infiltration water were observed, and the concentration of dissolved 137Cs was measured in a cedar forest in Fukushima Prefecture, Japan. Soil infiltration water was collected at 5 cm and 20 cm depths at a point distant from the tree trunk (Bt) and at the base of the tree trunk (Rd), where the influence of stemflow is strong. The observations were conducted from September 2019 to November 2021. During the observation period, an experiment was conducted to intercept the inflow of rainwater via throughfall or stemflow, and the change in soil infiltration water was observed. The results showed that the infiltration flux of radiocesium into the forest soil was significantly higher at the Rd site, about three times larger than at the Bt site. Particularly at the 20 cm depth at the Rd site, the soil infiltration water flux increased with the stemflow. Stemflow exclusion resulted in a decrease of the radiocesium flux by about 70% at all depths at the Rd site. These results suggest that stemflow increases the input of radiocesium at the base of the tree trunk and facilitates its transfer to deeper soil layers.

How to cite: Kato, H., Iida, H., Shinozuka, T., Niwano, Y., and Onda, Y.: Effects of stemflow on radiocesium infiltration into the forest soil, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10817, https://doi.org/10.5194/egusphere-egu22-10817, 2022.

EGU22-11022 | Presentations | GI2.3

Estimation of 137Cs inventories in each ocean basin by a global ocean general circulation model for the global database interpolation 

Daisuke Tsumune, Frank Bryan, Keith Lindsay, Kazuhiro Misumi, Takaki Tsubono, and Michio Aoyama

Radioactive cesium (137Cs) has been distributed over the global ocean by global fallout from atmospheric nuclear weapons tests, by releases from reprocessing plants in Europe, and by the Fukushima Daiichi Nuclear Power Plant (1F NPP) accident. In order to detect future contamination by radionuclides, it is necessary to understand the global distribution of radionuclides such as 137Cs. For this purpose, observed data have been compiled in a historical database (MARIS) by the IAEA. The spatio-temporal density of the observations varies widely; therefore, simulation by an ocean general circulation model (OGCM) can be helpful in the interpretation of these observations.

In order to clarify the behavior of 137Cs in the global ocean, OGCM simulations were conducted. The Parallel Ocean Program version 2 (POP2) of the Community Earth System Model version 2 (CESM2) is employed. The horizontal resolution is 1.125 degrees in longitude and from 0.28 to 0.54 degrees in latitude. There are 60 vertical levels, with a minimum spacing of 10 m near the ocean surface increasing with depth to a maximum of 250 m. The simulated period was from 1945 to 2030, with the circulation forced by repeating ("Normal Year") atmospheric conditions. As input sources of 137Cs to the model, global fallout from atmospheric nuclear tests, releases from reprocessing plants in Europe, and input from the 1F NPP accident were considered. It was assumed that the 2020 input conditions would continue after 2020.

The simulated 137Cs activity agrees well with the observed data in the database, especially in the Atlantic and Pacific Oceans, where the observation density is high. Since 137Cs undergoes radioactive decay with a half-life of 30 years, the inventory for each basin is the decay-corrected cumulative input minus the net flux to other basins. In the North Pacific, the inventory reached its maximum in 1966 due to global fallout from atmospheric nuclear weapons tests. Fluxes from the North Pacific to the Indian Ocean, Arctic Ocean, and Central Pacific were positive, so the North Pacific was a source of supply for the other ocean basins. The 1F NPP accident caused a 20% increase in the inventory in 2011. In the North Atlantic, the inventory reached its maximum in the late 1970s due to the releases from the reprocessing plants. The outflow flux from the North Atlantic to the Greenland Sea is larger than the other fluxes and is a source of supply to other ocean basins. After 2000, the inflow flux to the North Pacific from the Labrador Sea and the South Atlantic is larger than the outflow flux.
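The basin budget described above can be sketched as a simple box model, with the inventory at each step equal to the decayed previous inventory plus net input. All input and flux values below are illustrative placeholders, not the study's numbers:

```python
import numpy as np

# Half-life of 137Cs in years
T_HALF = 30.17
LAMBDA = np.log(2) / T_HALF  # decay constant, 1/yr

def basin_inventory(inputs, net_outflux):
    """Decay-corrected 137Cs inventory (PBq) in one ocean basin.

    inputs:      annual 137Cs input to the basin (PBq/yr), e.g. fallout + accident
    net_outflux: annual net transport out of the basin to neighbours (PBq/yr)
    The budget follows the abstract: inventory = decay-corrected cumulative
    input minus cumulative net flux to other basins.
    """
    inv = 0.0
    series = []
    for q_in, q_out in zip(inputs, net_outflux):
        inv = inv * np.exp(-LAMBDA) + q_in - q_out  # decay, then add net input
        series.append(inv)
    return np.array(series)

# Toy example: a fallout pulse ending in 1966, with a small steady outflow
years = np.arange(1945, 2031)
inputs = np.where((years >= 1958) & (years <= 1966), 1.0, 0.05)
outflux = np.full_like(inputs, 0.02)
inv = basin_inventory(inputs, outflux)
print(years[np.argmax(inv)])  # the inventory peaks when the input pulse ends
```

With these toy numbers the inventory peaks in 1966 and then declines, mirroring the North Pacific behaviour described in the abstract.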

The time series of the 137Cs inventory in each ocean basin and the fluxes among ocean basins were quantitatively analyzed by the OGCM simulations, and predictions for the next 10 years were made. The 137Cs activity concentrations from global fallout can still be detected in the global ocean after 2030. The OGCM simulations will be useful in planning future observations to fill gaps in the database.

How to cite: Tsumune, D., Bryan, F., Lindsay, K., Misumi, K., Tsubono, T., and Aoyama, M.: Estimation of 137Cs inventories in each ocean basin by a global ocean general circulation model for the global database interpolation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11022, https://doi.org/10.5194/egusphere-egu22-11022, 2022.

EGU22-11502 | Presentations | GI2.3

Retrospective assessment of 14C aquatic and atmospheric releases from Ignalina Nuclear Power Plant due to exploitation of two RBMK-1500 type reactors 

Evaldas Maceika, Rūta Barisevičiūtė, Laurynas Juodis, Algirdas Pabedinskas, Žilvinas Ežerinskis, Justina Šapolaitė, Laurynas Butkus, and Vidmantas Remeikis

Considerable amounts of 14C are generated in a nuclear reactor by neutrons. It accumulates in reactor components, coolant, and cleaning systems, and is partly released into the environment as gaseous releases and liquid effluents. Two RBMK-1500 type reactors were operated at the Ignalina NPP (Lithuania) from 1983 to 2009. Radiocarbon released from the NPP accumulated in the local biosphere, both terrestrial and aquatic, through photosynthesis, as the INPP used Lake Drūkšiai as a cooling pond.

Temporal variation of 14C in the lake ecosystem was examined by analyzing the measured radiocarbon concentration of the organic compounds (alkali-soluble, AS, and alkali-insoluble, AIS) derived from layers of the Drūkšiai lake bottom sediments. The lake sediment cores were sampled in 2013 and 2019 and sliced into 1 cm layers, and the 14C concentration of every layer was measured. The AS and AIS organic fractions of the sediment samples were extracted using the acid-base-acid method.

Tree ring cores were collected from Pinus sylvestris pines around the Ignalina NPP site at different directions and distances. Cellulose extraction was performed with the BABAB (base-acid-base-acid-bleach) procedure, and all samples were graphitized and measured by single-stage accelerator mass spectrometry at the Vilnius Radiocarbon facility. Analysis of the 14C concentration in tree rings provides the atmospheric radiocarbon concentration at locations around a nuclear object, offering an opportunity to evaluate its impact on water and terrestrial ecosystems.

The results showed a pronounced increase of 14C above background, up to 17.8 pMC, in the tree rings during the INPP operation period as well as during decommissioning (since 2010). According to data recorded in 2004-2017 by the local Ignalina NPP meteorological station, the prevailing wind direction was towards the north and east during warm and daylight periods. The radiocarbon released from the INPP stack is diluted as it travels downwind from the INPP. However, even 6.6 km away from the INPP, the impact of the power plant is still clearly visible. Using our Gaussian dispersion model, the estimated annual emissions of 14C activity from the Ignalina NPP to the air vary from year to year. When only the 1st INPP reactor unit was operating, in 1985-1987, average emissions were 1.2 TBq/year. Emissions almost doubled, to 2.1 TBq/year, in 1988, when the 2nd unit became operational. Later, emission levels increased further, which could be explained by the large amount of 14C accumulating in the graphite of the RBMK reactors and its gradual release.
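The kind of inversion a Gaussian dispersion model enables can be sketched as follows: because the ground-level concentration is linear in the emission rate, a single downwind observation fixes the source strength. The dispersion parameters, stack height, wind speed, and observed concentration below are hypothetical, not the authors' values:

```python
import numpy as np

def plume_conc(Q, u, x, H, a=0.08, b=0.06):
    """Ground-level centreline concentration (Bq/m3) from an elevated stack.

    Simple Gaussian plume with power-law dispersion widths
    sigma_y = a*x, sigma_z = b*x (illustrative neutral-stability values,
    not the ones used by the authors).
    Q: emission rate (Bq/s), u: wind speed (m/s),
    x: downwind distance (m), H: effective stack height (m).
    """
    sy, sz = a * x, b * x
    return Q / (np.pi * sy * sz * u) * np.exp(-H**2 / (2 * sz**2))

# Invert for Q given an observed excess concentration 6.6 km downwind
c_obs = 1.0e-3           # hypothetical excess 14C activity concentration, Bq/m3
x, H, u = 6600.0, 150.0, 4.0
Q = c_obs / plume_conc(1.0, u, x, H)  # linearity in Q allows unit-rate inversion
print(f"estimated emission rate: {Q:.3e} Bq/s")
```

In practice the tree-ring pMC excess would first be converted to an atmospheric activity concentration, and many sampling directions and distances would constrain Q jointly.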

14C concentration profile analysis of the lake bottom sediment cores revealed a significant impact of the Ignalina NPP on the Lake Drūkšiai ecosystem. An increase of the 14C concentration in the bottom sediment layers by 80 pMC in the AS fraction, but only by 9 pMC in the AIS fraction, was observed, corresponding approximately to the years 1998-2003. The maximum peak in the AS fraction, 189 pMC, was reached around 2001, followed by gradual recovery of the lake. This radiocarbon peak represents a single large pollution release; the critical period was in the 2000s, when maintenance works on the reactors were performed.

How to cite: Maceika, E., Barisevičiūtė, R., Juodis, L., Pabedinskas, A., Ežerinskis, Ž., Šapolaitė, J., Butkus, L., and Remeikis, V.: Retrospective assessment of 14C aquatic and atmospheric releases from Ignalina Nuclear Power Plant due to exploitation of two RBMK-1500 type reactors, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11502, https://doi.org/10.5194/egusphere-egu22-11502, 2022.

EGU22-11571 | Presentations | GI2.3

Mapping of Post-Disaster Environments using 3D Backprojection and Iterative Inversion Methods Optimised for Limited-Pixel Gamma Spectrometers on Unoccupied Aerial Systems (UAS). 

Dean Connor, David Megson-Smith, Kieran Wood, Robbie Mackenzie, Euan Connolly, Sam White, Freddie Russell-Pavier, Matthew Ryan-Tucker, Peter Martin, Yannick Verbelen, Thomas Richardson, Nick Smith, and Thomas Scott

All radiological measurements acquired from airborne detectors suffer from the issues of geometrical signal dilution, signal attenuation and a complex interaction of the effective sampling area of the detector system with the 3D structure of the surrounding environment. Understanding and accounting for these variables is essential in recovering accurate dose rate maps that can help protect responding workforces in radiologically contaminated environments.

Two types of terrain-cognisant methods for improving source localisation and the contrast of airborne radiation maps are presented in this work: 'Point Cloud Constrained 3D Backprojection' and 'Point Cloud Constrained Randomised Kaczmarz Inversion'. Each algorithm uses a combination of airborne gamma-spectrometry and 3D scene information collected by UAS platforms, and both have been applied to data collected with lightweight, simple (non-imaging) detector payloads at numerous locations across the Chornobyl Exclusion Zone (CEZ).

Common to both algorithms is the projection of the photopeak intensity onto a point cloud representation of the environment, taking into account the position and orientation of the UAS as well as the 3D response of the spectrometer. The 3D Backprojection method can be considered a relatively fast method of mapping through proximity, in which the measured photopeak intensity is split over the point cloud according to the above factors. It is an additive technique: each measurement increases the overall magnitude of the radiation field assigned to the survey area, so more measurements continue to increase the total radiation attributed to the site. The total measured intensity of the solution is therefore normalised according to the time spent in proximity to each point in the scene, determined by splitting and projecting the nominal measurement time at each survey point over the point cloud according to distance from the survey position, thus accounting for sampling biases during the survey.
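A minimal sketch of this proximity backprojection with dwell-time normalisation, assuming an isotropic 1/r² weighting in place of the payload's true orientation-dependent 3D response:

```python
import numpy as np

def backproject(points, positions, counts, dwell):
    """Proximity backprojection of photopeak counts onto a point cloud.

    points:    (N,3) scene point cloud
    positions: (M,3) UAS survey positions
    counts:    (M,) photopeak intensity at each position
    dwell:     (M,) measurement live time at each position (s)
    An isotropic 1/r^2 detector response is assumed here; the real payload
    response is 3D and orientation-dependent, as the abstract notes.
    """
    intensity = np.zeros(len(points))
    exposure = np.zeros(len(points))
    for pos, c, t in zip(positions, counts, dwell):
        r2 = np.sum((points - pos) ** 2, axis=1) + 1e-6
        w = 1.0 / r2
        w /= w.sum()            # split this measurement over the scene
        intensity += c * w      # additive projection of counts
        exposure += t * w       # project dwell time the same way
    # normalise by projected dwell time to remove sampling bias
    return intensity / np.maximum(exposure, 1e-12)
```

Without the final division, points overflown repeatedly would accumulate intensity simply from being sampled more often; the exposure normalisation removes that bias.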

The inversion approach adapts algorithms routinely used in medical imaging to the unconstrained setting in which the detector no longer completely surrounds the subject/target. A forward projection model, based on the contribution of distant point sources to the detector intensity, is used to determine the relationship between the full set of measurements and the 3D scene. This results in a large system of linear equations in which every point in the scene is assumed to contribute to each measured intensity. The algorithm randomly draws measurements from the aerial set and back-projects them onto the point cloud, with the initial solution set to emit no radiation. After a given number of iterations, the fit of the current solution to the original measurements is assessed through a least-squares method, and the best estimate is updated whenever the fit improves. This continues until a minimum residual is reached before the system diverges, representing the most confident solution. Based on examples from both simulations and real-world data, the improvement in contrast achieved by this inversion method can make airborne maps equivalent to ground-based surveys, even when operating at 20 m AGL and above.
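The core update of a randomised Kaczmarz solver of this kind can be sketched as follows; the non-negativity clamp (for a physical emission field) and the toy system are our additions for illustration, not the authors' exact scheme:

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=5000, seed=0):
    """Randomised Kaczmarz solver for A x = b with x >= 0.

    A: (M,N) forward model -- the assumed contribution of each scene point
       to each measurement; b: (M,) measured photopeak intensities.
    The solution starts from a zero-emission state, as in the abstract.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)
    for _ in range(n_iter):
        i = rng.integers(A.shape[0])        # draw a random measurement
        if row_norms[i] == 0:
            continue
        # project x onto the hyperplane defined by measurement i
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
        np.clip(x, 0.0, None, out=x)        # keep the emission field physical
    return x

# Tiny consistent system as a sanity check
A = np.array([[1.0, 0.5], [0.2, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x = randomized_kaczmarz(A, A @ x_true)
print(np.round(x, 3))
```

In the full method, the residual against all measurements would be evaluated periodically via least squares, with the best solution retained before divergence.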

How to cite: Connor, D., Megson-Smith, D., Wood, K., Mackenzie, R., Connolly, E., White, S., Russell-Pavier, F., Ryan-Tucker, M., Martin, P., Verbelen, Y., Richardson, T., Smith, N., and Scott, T.: Mapping of Post-Disaster Environments using 3D Backprojection and Iterative Inversion Methods Optimised for Limited-Pixel Gamma Spectrometers on Unoccupied Aerial Systems (UAS)., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11571, https://doi.org/10.5194/egusphere-egu22-11571, 2022.

EGU22-11620 | Presentations | GI2.3

Methodology for estimating the emission of radionuclides into the atmosphere from wildfires in the Chernobyl Exclusion Zone 

Valentyn Protsak, Gennady Laptev, Oleg Voitsekhovych, Taras Hinchuk, and Kyrylo Korychenskyi

Most of the territory of the Chernobyl Exclusion Zone (CEZ) is covered by forest. The forests of the CEZ have accumulated a significant part of the radioactive release and for many years have served as a barrier preventing the spread of radionuclide contamination outside the CEZ.

According to the classification of wildfire danger, the forests of the CEZ belong to the high, above-average, and medium classes, making wildfires quite common.

The poor, sod-podzolic soils of Ukrainian Polesye promote the uptake of 90Sr and 137Cs into plant biomass. During wildfires, some of the radionuclides contained in the burning biomass are emitted into the atmosphere with the combustion products. Biologically important radionuclides such as 90Sr, 137Cs, plutonium isotopes, and 241Am, bound to fine combustion aerosols, can be transported by atmospheric flows over long ranges, causing secondary radioactive fallout and additional inhalation dose loads on the population.

Lack of actual information on the source term (the rate of emission of radionuclides) does not allow reliable modeling of the radiological impact of wildfires. To address this issue, we propose a methodology that allows operational assessment of the dynamics of radionuclide emissions into the atmosphere from wildfires in the CEZ.

The basic parameters for the calculations are:

  • cartographic data on the density of radionuclide contamination of the territory of the CEZ;
  • classification of the territory of the CEZ according to the distribution of forests and meadows;
  • classification of CEZ forests according to taxation (forest inventory) characteristics, to estimate the amount of stored fuel biomass (kg/m2);
  • experimental data on the transfer of radionuclides from soil to the main components of biomass, for the calculation of the radionuclide inventory in fuel biomass (Bq/m2); for meadows the main fuel component is grass turf, while for forests these are litter, wood, bark and pine needles;
  • experimental data on the emission factors of radionuclides from fuel biomass.

Implementation of the proposed algorithm as a GIS application makes it possible to assess the dynamics of radionuclide emission into the atmosphere by delineating the fire areas on the CEZ map. The NASA WorldView interactive mapping web application can be used to estimate the temporal and spatial characteristics of a wildfire as it develops; the area affected by fire is contoured by analyzing the cluster of thermal hotspots. Operational contouring of a wildfire can also be carried out using data delivered by unmanned aerial vehicles.
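The per-cell calculation combining these parameters might look like the sketch below; every numeric value is illustrative, not taken from the CEZ datasets:

```python
# Hypothetical per-cell emission estimate: the methodology combines, for each
# burned map cell, the radionuclide inventory in fuel biomass with an
# emission factor. All parameter values below are illustrative.

def cell_emission(contamination_bq_m2, transfer_factor_m2_kg, fuel_kg_m2,
                  emission_factor, area_m2):
    """137Cs emitted to the atmosphere from one burned cell (Bq).

    contamination_bq_m2:  soil contamination density (Bq/m2)
    transfer_factor_m2_kg: aggregated soil-to-biomass transfer factor (m2/kg)
    fuel_kg_m2:           stored fuel biomass (kg/m2)
    emission_factor:      fraction of fuel-bound activity released when burned
    """
    fuel_activity_bq_kg = contamination_bq_m2 * transfer_factor_m2_kg
    inventory_bq_m2 = fuel_activity_bq_kg * fuel_kg_m2
    return inventory_bq_m2 * emission_factor * area_m2

# Example: one 30 m x 30 m forest cell
q = cell_emission(contamination_bq_m2=5.0e5, transfer_factor_m2_kg=1.0e-3,
                  fuel_kg_m2=4.0, emission_factor=0.2, area_m2=900.0)
print(f"{q:.2e} Bq")
```

Summing such cells over the delineated fire contour, per time step, yields the emission dynamics for the dispersion source term.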

Application of the proposed algorithm to the analysis of the dynamics of 137Cs emissions into the atmosphere from the April 2020 wildfire showed good agreement with the data reported by various authors who used inverse simulation methods. The accuracy of the calculations can be improved by refining the radionuclide emission factors and by taking into account fire intensity data, which in turn affect both the radionuclide emission factor and the degree of burnout of plant biomass.

How to cite: Protsak, V., Laptev, G., Voitsekhovych, O., Hinchuk, T., and Korychenskyi, K.: Methodology for estimating the emission of radionuclides into the atmosphere from wildfires in the Chernobyl Exclusion Zone, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11620, https://doi.org/10.5194/egusphere-egu22-11620, 2022.

Human activities such as mining and processing of naturally occurring radioactive materials can result in enhanced radioactivity levels in the environment. In South Africa, extensive mining of gold and uranium has produced large mine tailings dams that are highly concentrated in radioactive elements. The purpose of this study was to carry out a large-scale preliminary survey of the activity concentrations of 238U, 232Th and 40K in mine tailings, soils and outcropping rocks in the West Rand District, South Africa, in order to better understand the impact of the abandoned mine tailings on the surrounding soil. The study employed an in-situ gamma spectrometry technique to measure the activity concentrations of 238U, 232Th and 40K. The portable BGO SUPER-SPEC (RS-230) spectrometer, with a 6.3 cubic inch Bismuth Germanate Oxide (BGO) detector, was used for the in-situ measurements. In mine tailings, the activity concentrations of 238U, 232Th and 40K were found to range from 209.95 to 2578.68 Bq/kg, 19.49 to 108.00 Bq/kg and 31.30 to 626.00 Bq/kg, respectively. In surface soil, the activity concentration of 238U for all measurements ranged between 12.35 and 941.07 Bq/kg, with an average value of 59.15 Bq/kg. 232Th levels ranged between 12.59 and 78.36 Bq/kg, with an average of 34.91 Bq/kg. For 40K, the average activity concentration was found to be 245.64 Bq/kg, in a range of 31.30 - 1345.90 Bq/kg. For the rock samples analyzed, average activity concentrations were 32.97 Bq/kg, 32.26 Bq/kg and 351.52 Bq/kg for 238U, 232Th and 40K, respectively. The results indicate that higher radioactivity levels are found in mine tailings than in rocks and soils. 238U was found to contribute significantly more to the overall activity concentration in tailings dams than 232Th and 40K. It was observed that the mine tailings can affect the activity concentration of 238U in soil in the immediate vicinity.
However, on a regional scale it was found that the radioactivity levels in surface soil mainly depend on the radioelement concentration of the underlying rocks. The contamination is confined to areas where mine tailings materials are washed off and deposited on surface soils in close proximity to the tailings sources. This indicates that the migration of uranium from tailings dams is localized and occurs over short distances. It is recommended that further radiological monitoring be conducted in areas found to have elevated concentrations of uranium-238.

Keywords: In-situ gamma-ray spectrometry, mine tailings, radioactivity, soil.

How to cite: Moshupya, P., Abiye, T., and Korir, I.: In-situ measurements of natural radioactivity levels in the gold mine tailings dams of the West Rand District, South Africa., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11669, https://doi.org/10.5194/egusphere-egu22-11669, 2022.

EGU22-74 | Presentations | GI2.2

Experimental assessment of corrosion influence in reinforced concrete by GPR 

Salih Artagan, Vladislav Borecky, Özgür Yurdakul, and Miroslav Luňák

Corrosion is one of the most critical issues leading to damage in reinforced concrete structures. In most cases, the detection of corrosion damage is performed by visual inspection. Other techniques (drilling cores with petrographic or chemical examination, potential measurements, and resistivity measurements) require at least minimal destruction, since they need access to the reinforcement bar [1]. Recently, there has been an increasing trend to use Ground Penetrating Radar (GPR) as one of the emerging non-destructive testing (NDT) techniques in the diagnosis of corrosion [2].

This paper focuses on a series of GPR tests on specimens constructed from poor-quality concrete and plain round bars. These specimens were subjected to accelerated corrosion tests under laboratory conditions. The corrosion intensity of the specimens is assessed non-destructively with GPR by collecting data before and after the corrosion tests. For the GPR tests, the IDS Aladdin system was used with a double-polarized 2 GHz antenna. Based on the GPR measurements, the Relative Dielectric Permittivity (RDP) values of the concrete are calculated from the known dimensions of the specimens and the two-way travel time (twt) values obtained from A-scans. The change in the RDP values of the specimens before and after exposure to corrosion is then computed. Moreover, the amplitude change and the variation in the frequency spectrum before and after corrosion exposure are analyzed.
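The RDP calculation from known specimen thickness and two-way travel time reduces to a one-line formula; a sketch with illustrative numbers (not the study's measurements):

```python
# Relative dielectric permittivity from two-way travel time and a known
# specimen thickness. Values below are illustrative.
C = 0.2998  # speed of light in vacuum, m/ns

def rdp_from_twt(twt_ns, thickness_m):
    """RDP = (c * twt / (2 d))^2 for a wave traversing thickness d twice."""
    v = 2.0 * thickness_m / twt_ns          # propagation velocity, m/ns
    return (C / v) ** 2

# A 0.20 m specimen with a 3.5 ns two-way travel time
print(round(rdp_from_twt(3.5, 0.20), 2))
```

An increase in twt over the same thickness (e.g., from corrosion-induced moisture and cracking) maps directly to an increase in the computed RDP.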

The results of this experimental study thus indicate that corrosion damage in reinforced concrete can be determined by using several GPR signal attributes. More laboratory tests are required for better quantification of the impact of the corrosion phenomenon in reinforced concrete.

All GPR tests were conducted in Educational and Research Centre in Transport; Faculty of Transport Engineering; University of Pardubice. This work is supported by the University of Pardubice (Project No: CZ.02.2.69/0.0/0.0/18_053/0016969).

[1]        V. Sossa, V. Pérez-Gracia, R. González-Drigo, M. A. Rasol, Lab Non Destructive Test to Analyze the Effect of Corrosion on Ground Penetrating Radar Scans, Remote Sensing. 11 (2019) 2814. https://doi.org/10.3390/rs11232814.

[2]        K. Tešić, A. Baričević, M. Serdar, Non-Destructive Corrosion Inspection of Reinforced Concrete Using Ground-Penetrating Radar: A Review, Materials. 14 (2021) 975. https://doi.org/10.3390/ma14040975.

How to cite: Artagan, S., Borecky, V., Yurdakul, Ö., and Luňák, M.: Experimental assessment of corrosion influence in reinforced concrete by GPR, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-74, https://doi.org/10.5194/egusphere-egu22-74, 2022.

EGU22-1544 | Presentations | GI2.2

Dielectric Constant Estimation through Alpha Angle with a Polarimetric GPR System 

Lilong Zou, Fabio Tosti, and Amir M. Alani

As a recognised non-destructive testing (NDT) tool, Ground Penetrating Radar (GPR) is becoming increasingly common in the field of environmental engineering [1]-[3]. GPR uses electromagnetic (EM) waves, which travel at a velocity determined by the permittivity of the material. With the development of new GPR signal processing methodologies, extracting information on the physical properties of hidden targets has become a key objective. Currently, only three types of approach can be applied for the quantitative estimation of permittivity from GPR data: hyperbola curve fitting, common mid-point (CMP) velocity analysis, and full-waveform inversion. The main challenge in estimating permittivity from GPR backscattered signals is therefore to provide an effective and accurate prediction strategy.

In this research, we used a dual-polarimetric GPR system to estimate the dielectric constant of targets. The system is equipped with two 2 GHz antennas polarised perpendicular to one another (HH and VV). The dual polarisation enables deeper surveying, providing images of both shallow and deeper subsurface features. Polarimetry is a property of EM waves that generally refers to the orientation of the electric field vector; it plays an important role here because it allows either direct estimation or parameterisation of permittivity effects within the scattering problem in remote sensing [4].

The aim of this research is to provide a novel and more robust approach to dielectric constant prediction using a dual-polarimetric GPR system. To this end, the relationship between the relative permittivity and the polarimetric alpha angle was investigated based on data collected by a GPR system with dual-polarised antennas. The approach was then assessed by laboratory experiments in which sand targets with different moisture contents (simulating targets of different relative permittivity) were measured. After signal processing, a clear relationship between the alpha angle and the relative permittivity was obtained, proving the viability of the proposed method.
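A dual-pol alpha angle of this kind can be sketched via an eigen-decomposition of the Pauli coherency matrix (a Cloude-Pottier-style construction reduced to HH/VV; this is our reading of the approach, not necessarily the authors' exact formulation):

```python
import numpy as np

def mean_alpha(hh, vv):
    """Mean polarimetric alpha angle (deg) from dual-pol HH/VV samples.

    Builds the 2x2 Pauli coherency matrix, eigen-decomposes it, and
    averages the per-eigenvector alpha angles with the eigenvalue
    pseudo-probabilities (Cloude-Pottier style, reduced to dual pol).
    """
    k = np.vstack([hh + vv, hh - vv]) / np.sqrt(2)   # Pauli scattering vectors
    T = (k @ k.conj().T) / k.shape[1]                # sample coherency matrix
    w, v = np.linalg.eigh(T)
    p = w / w.sum()                                  # pseudo-probabilities
    alpha_i = np.degrees(np.arccos(np.abs(v[0, :]))) # alpha per eigenvector
    return float(p @ alpha_i)
```

For surface-like scattering (HH ≈ VV) the mean alpha tends to 0°, while for dihedral-like scattering (HH ≈ −VV) it tends to 90°; intermediate values carry the permittivity dependence exploited here.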

 

Acknowledgements

The authors would like to express their sincere thanks and gratitude to the following trusts, charities, organisations and individuals for their generosity in supporting this project: Lord Faringdon Charitable Trust, The Schroder Foundation, Cazenove Charitable Trust, Ernest Cook Trust, Sir Henry Keswick, Ian Bond, P. F. Charitable Trust, Prospect Investment Management Limited, The Adrian Swire Charitable Trust, The John Swire 1989 Charitable Trust, The Sackler Trust, The Tanlaw Foundation, and The Wyfold Charitable Trust.

 

References

[1] Zou, L. et al., 2020. Mapping and Assessment of Tree Roots using Ground Penetrating Radar with Low-Cost GPS. Remote Sensing, vol.12, no.8, pp:1300.

[2] Zou, L. et al., 2020. On the Use of Lateral Wave for the Interlayer Debonding Detecting in an Asphalt Airport Pavement Using a Multistatic GPR System. IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 6, pp. 4215-4224.

[3] Zou, L. et al., 2021. Study on Wavelet Entropy for Airport Pavement Debonded Layer Inspection by using a Multi-Static GPR System. Geophysics, vol. 86, no. 3, pp. WB69-WB78.

[4] J. Lee and E. Pottier, Polarimetric Imaging: From Basics to Applications, FL, Boca Raton: CRC Press, 2009.

How to cite: Zou, L., Tosti, F., and Alani, A. M.: Dielectric Constant Estimation through Alpha Angle with a Polarimetric GPR System, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1544, https://doi.org/10.5194/egusphere-egu22-1544, 2022.

EGU22-1849 | Presentations | GI2.2

On the use of Artificial Intelligence for classification of road pavements based on mechanical properties using ground-penetrating radar and deflection-based non-destructive testing data 

Fateme Dinmohammadi, Luca Bianchini Ciampoli, Fabio Tosti, Andrea Benedetto, and Amir M. Alani

Road pavements play a crucial role in the development of transport infrastructure, as they provide a safe surface on which vehicles can travel comfortably [1]. Pavements are multi-layered structures of processed and compacted materials of different thicknesses, in both unbound and bound forms, with the function of supporting vehicle loads and providing a smooth riding quality. The condition of road pavement structures is susceptible to the impact of uncertain environmental factors and traffic loads, resulting in pavement deterioration over time. Therefore, the mechanical properties of pavements (such as strength and stiffness) need to be monitored on a regular basis to make sure that the pavement condition meets the prescribed thresholds. Ground-penetrating radar (GPR) and deflection-based methods (e.g., the falling weight deflectometer, FWD) are the most popular non-destructive testing (NDT) methods in pavement engineering and are often used in combination to evaluate the damage and strength of pavements [2-4]. The layer thickness data from GPR scans are used as an input to deflection-based measurements to back-calculate the elastic moduli of the layers [2]. In recent years, problems concerning the automatic interpretation of NDT data have received considerable attention and have stimulated interest in many industries, including transportation. The use of Artificial Intelligence (AI) and Machine Learning (ML) techniques for the interpretation of NDT data offers many advantages, such as improved speed and accuracy of analysis, especially for large-volume datasets. This study aims to train AI and ML algorithms on a dataset collected with GPR (2 GHz horn antenna) and the Curviameter deflection-based equipment to classify flexible road pavements based on their mechanical properties.
Curviameter data are used as ground-truth measurements of pavement stiffness, whereas the GPR data provide geometric and physical attributes of the pavement structure. Several methods, such as support vector machines (SVM), artificial neural networks (ANN), and k-nearest neighbours (KNN), are proposed, and their performance in terms of accuracy of estimation of the strength and deformation properties of the pavement layers is compared with each other as well as with classical statistical methods. The results of this study can help road maintenance officials identify and prioritise pavements at risk and make cost-effective, informed maintenance decisions.
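A minimal sketch of such a classifier comparison, using scikit-learn on synthetic stand-ins for the GPR and Curviameter data (all features, labels, and thresholds below are fabricated for illustration, not the study's dataset):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
# Synthetic GPR-derived features: layer thickness (m) and permittivity
X = np.column_stack([rng.normal(0.25, 0.05, n),
                     rng.normal(6.0, 1.0, n)])
# Hypothetical labelling rule standing in for a Curviameter stiffness class
y = ((X[:, 0] > 0.25) & (X[:, 1] > 6.0)).astype(int)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "ANN": make_pipeline(StandardScaler(),
                         MLPClassifier(max_iter=2000, random_state=0)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: {scores.mean():.2f}")
```

Cross-validated accuracy (or a regression analogue for stiffness values) gives a like-for-like comparison across the learners and against classical statistical baselines.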

References

[1] Tosti, F., Bianchini Ciampoli, L., D’Amico, F. and Alani, A.M. (2019). Advances in the prediction of the bearing capacity of road flexible pavements using GPR. In: 10th International Workshop on Advanced GPR, European Association of Geoscientists & Engineers, pages 1-5.

[2] Plati, C., Loizos, A. & Gkyrtis, K. Assessment of Modern Roadways Using Non-destructive Geophysical Surveying Techniques. Surv Geophys 41, 395–430 (2020). 

[3] A. Benedetto, F. Tosti, Inferring bearing ratio of unbound materials from dielectric properties using GPR, in: Proceedings of the 2013 Airfield and Highway Pavement Conference: Sustainable and Efficient Pavements, June 2013, pp. 1336–1347.

[4] Tosti, F., Bianchini Ciampoli, L., D’Amico, F., Alani, A.M., Benedetto, A. (2018). An experimental-based model for the assessment of the mechanical properties of road pavements using GPR. Construction and Building Materials, Volume 165, pp. 966-974.

How to cite: Dinmohammadi, F., Bianchini Ciampoli, L., Tosti, F., Benedetto, A., and Alani, A. M.: On the use of Artificial Intelligence for classification of road pavements based on mechanical properties using ground-penetrating radar and deflection-based non-destructive testing data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1849, https://doi.org/10.5194/egusphere-egu22-1849, 2022.

EGU22-2166 | Presentations | GI2.2

Attenuation-compensated reverse-time migration of waterborne GPR based on attenuation coefficient estimation 

Ruiqing Shen, Yonghui Zhao, Hui Cheng, and Shuangcheng Ge

In waterborne ground-penetrating radar (GPR) detection, the reverse-time migration (RTM) method can image the structure of the water bottom and locate buried bodies. However, the image quality is limited by the attenuation of electromagnetic waves, and how to compensate for this attenuation becomes a critical problem. Several attenuation-compensated RTM methods have been developed in recent years; we use attenuation-compensated RTM based on negative conductivity. However, the method is limited by the estimation of the attenuation coefficient. Here, we propose an attenuation-coefficient estimation method based on the centroid frequency downshift (CFDS) method. In EM attenuation tomography, the CFDS method is used for attenuation estimation; compared with the CFDS method in tomography, our proposal is based on the centroid frequency of the water-bottom reflection instead of the source wavelet, so the problem of the unknown source wavelet is avoided. The method relies on two assumptions: 1) GPR data can be regarded as zero-offset records; 2) reflections from underwater interfaces are independent of frequency. In addition, the formula for the attenuation coefficient shows that when the ratio of the conductivity to the product of the dielectric constant and the angular frequency is greater than one, the attenuation coefficient tends to a constant, which violates the assumption that the attenuation coefficient is linearly related to frequency. We therefore select a proper frequency range satisfying the linear relation using the spectral ratio method, because the ratio of the spectrum of the water-bottom reflection to that of an underwater interface reflection follows the variation of the attenuation coefficient with frequency. The CFDS method then yields an attenuation coefficient linear in frequency, and we evaluate it at half of the central frequency to obtain the estimated attenuation coefficient.
We design a layered waterborne GPR detection model in which the conductivity of the silt layer varies between 0.1 and 0.01. The error of the conductivity estimation is below 10%. After the attenuation coefficient is acquired, the attenuation-compensated RTM works correctly and effectively.
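The centroid-downshift estimate can be sketched as follows, assuming a Gaussian amplitude spectrum and the classic frequency-shift relation from EM/seismic attenuation tomography; this illustrates the general CFDS idea with the water-bottom reflection as reference, not the authors' exact implementation:

```python
import numpy as np

def centroid_and_variance(signal, dt):
    """Centroid frequency and spectral variance of a trace segment
    (computed on the amplitude spectrum)."""
    a = np.abs(np.fft.rfft(signal))
    f = np.fft.rfftfreq(len(signal), dt)
    fc = (f * a).sum() / a.sum()
    var = ((f - fc) ** 2 * a).sum() / a.sum()
    return fc, var

def attenuation_slope(bottom_seg, deep_seg, dt, delta_t):
    """CFDS estimate of the attenuation slope k in alpha(f) = k * f.

    For a Gaussian amplitude spectrum, linear-with-frequency attenuation
    over travel time delta_t shifts the centroid:
        fc_deep = fc_bottom - k * var * delta_t,
    with the water-bottom reflection standing in for the unknown source
    wavelet, as proposed in the abstract.
    """
    fc_b, var_b = centroid_and_variance(bottom_seg, dt)
    fc_d, _ = centroid_and_variance(deep_seg, dt)
    return (fc_b - fc_d) / (var_b * delta_t)
```

On a synthetic Gaussian-spectrum pulse attenuated by exp(−k·f·Δt), the estimator recovers k to within a few percent, limited only by spectral discretisation.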

How to cite: Shen, R., Zhao, Y., Cheng, H., and Ge, S.: Attenuation-compensated reverse-time migration of waterborne GPR based on attenuation coefficient estimation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2166, https://doi.org/10.5194/egusphere-egu22-2166, 2022.

EGU22-2253 | Presentations | GI2.2

An approach to integrate GPR thickness variability and roughness level into pavement performance evaluation 

Christina Plati, Andreas Loizos, and Konstantina Georgouli

It is a truism that pavements deteriorate due to the combined effects of traffic loads and environmental conditions. The manner or ability of a road to meet the demands of traffic and the environment and to provide at least an acceptable level of performance to road users throughout its life is referred to as pavement performance. An important indicator of pavement performance is ride quality. This is a rather subjective measure of performance that depends on (i) the physical properties of the pavement surface, (ii) the mechanical properties of the vehicle, and (iii) the acceptance of the perceived ride quality by road users. Due to the subjectivity of ride quality assessment, many researchers have worked in the past to develop an objective indicator of pavement quality. The International Roughness Index (IRI) is considered a good indicator of pavement performance in terms of road roughness. It was developed to be linear, transferable, and stable over time and is based on the concept of a true longitudinal profile. Following the identification and quantification of ride quality by the IRI, pavement activities include the systematic collection of roughness data in the form of the IRI using advanced laser profilers, either to "accept" an as-built pavement or to monitor and evaluate the functional condition of an in-service pavement.

On the other hand, pavement performance can vary significantly due to variations in layer thickness, primarily due to the construction process and quality control methods used. Even if a uniform design thickness is specified for a road section, the actual thickness may vary. It is expected that the layer thickness will have some probability distribution, with the highest density being around the target thickness. Information on layer thickness is usually obtained from as-built records, from coring or from Ground Penetrating Radar (GPR) surveys. GPR is a powerful measurement system that provides pavement thickness estimates with excellent data coverage at travel speeds. It can significantly improve pavement structure estimates compared to data from as-built plans. In addition, GPR surveys are fast, cost effective, and non-destructive compared to coring.
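As a minimal illustration of the thickness estimate that GPR provides (a textbook relation, not part of this study's own processing), the layer thickness follows from the picked two-way travel time and an assumed relative dielectric permittivity of the layer:

```python
import numpy as np

C = 299_792_458.0  # free-space EM wave speed, m/s

def layer_thickness(twt_ns, eps_r):
    """Layer thickness (m) from GPR two-way travel time (ns) and the
    relative dielectric permittivity of the layer."""
    v = C / np.sqrt(eps_r)            # wave speed inside the layer
    return v * (twt_ns * 1e-9) / 2.0  # divide by 2: down-and-back path

# Example: asphalt layer with eps_r ~ 6 (an assumed typical value)
print(round(layer_thickness(3.0, 6.0), 3))  # ~ 0.184 m
```

Uncertainty in the assumed permittivity maps directly into thickness error, which is one reason GPR thickness estimates are typically calibrated against a small number of cores.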

The present research developed a sensing approach that extends the capability of GPR beyond its ability to estimate pavement thickness. Specifically, the approach links GPR thickness to IRI based on the principle that a GPR system and a laser profiler are independent sensors that can be combined to provide a more complete image of pavement performance. To this end, field data collected by a GPR system and a laser profiler along highway sections are analyzed to evaluate pavement performance and predict future condition. The results show that thickness variations are related to roughness levels and specify the deterioration of the pavement throughout its lifetime.

How to cite: Plati, C., Loizos, A., and Georgouli, K.: An approach to integrate GPR thickness variability and roughness level into pavement performance evaluation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2253, https://doi.org/10.5194/egusphere-egu22-2253, 2022.

EGU22-2341 | Presentations | GI2.2 | Highlight

Monitoring of Bridges by Satellite Remote Sensing Using Multi-Source and Multi-Resolution Data Integration Techniques: a Case Study of the Rochester Bridge 

Valerio Gagliardi, Luca Bianchini Ciampoli, Fabrizio D’Amico, Maria Libera Battagliere, Sue Threader, Amir M. Alani, Andrea Benedetto, and Fabio Tosti

Monitoring of bridges and viaducts has become a priority for asset owners due to progressive infrastructure ageing and its impact on safety and management costs. Advances in data processing and interpretation methods, together with the accessibility of Synthetic Aperture Radar (SAR) datasets from different satellite missions, have raised interest in near-real-time bridge assessment methods. In this context, the Multi-temporal Interferometric Synthetic Aperture Radar (MT-InSAR) space-borne monitoring technique has proven effective for the detection of cumulative surface displacements with millimetre accuracy [1-3].

This research aims to investigate the viability of using satellite remote sensing for structural assessment of the Rochester Bridge in Rochester, Kent, UK. To this purpose, high-resolution SAR datasets are used as the reference information and complemented by additional data from different sensing technologies (e.g., medium-resolution SAR datasets and ground-based (GB) non-destructive testing (NDT)). In detail, high-resolution SAR products of the COSMO-SkyMed (CSK) mission (2017-2019) provided by the Italian Space Agency (ASI) in the framework of the Project “Motib - ID 742”, approved by ASI, are processed using a MT-InSAR approach.

The method allowed us to identify several Persistent Scatterers (PSs) – associated with different structural elements (e.g., the bridge piers) over the four main bridge decks – and to monitor bridge displacements during the observation period. The outcomes of this study demonstrate that information from high-resolution InSAR data can be successfully integrated with datasets of different resolution, scale and source technology. Compared to stand-alone technologies, a main advantage of the proposed approach is the provision of a fully comprehensive (i.e., surface and subsurface) and dense array of information with larger spatial coverage and a higher acquisition frequency. This results in a more effective identification and monitoring of decays at reduced costs, paving the way for implementation into next-generation Bridge Management Systems (BMSs).

Acknowledgements: This research is supported by the Italian Ministry of Education, University and Research under the National Project “EXTRA TN”, PRIN2017, Prot. 20179BP4SM. Funding from MIUR, in the frame of the “Departments of Excellence Initiative 2018–2022”, attributed to the Department of Engineering of Roma Tre University, is acknowledged. The authors would also like to acknowledge the Rochester Bridge Trust for supporting the research discussed in this paper. The COSMO-SkyMed (CSK) products - ©ASI - are provided by the Italian Space Agency (ASI) under a license to use in the framework of the Project “ASI Open-Call - Motib (ID 742)” approved by ASI.

References

[1] Gagliardi V., Bianchini Ciampoli L., D'Amico F., Alani A. M., Tosti F., Battagliere M. L., Benedetto A., “Bridge monitoring and assessment by high-resolution satellite remote sensing technologies”, Proc. SPIE 11525, SPIE Future Sensing Technologies, 2020. doi: 10.1117/12.2579700

[2] Jung, J.; Kim, D.-j.; Palanisamy Vadivel, S.K.; Yun, S.-H. "Long-Term Deflection Monitoring for Bridges Using X and C-Band Time-Series SAR Interferometry". Remote Sens. 2019

[3] Gagliardi V., Bianchini Ciampoli L., D'Amico F., Tosti F., Alani A. and Benedetto A. “A Novel Geo-Statistical Approach for Transport Infrastructure Network Monitoring by Persistent Scatterer Interferometry (PSI)”. In: 2020 IEEE Radar Conference, Florence, Italy, 2020, pp. 1-6, doi: 10.1109/RadarConf2043947.2020.9266336

How to cite: Gagliardi, V., Bianchini Ciampoli, L., D’Amico, F., Battagliere, M. L., Threader, S., Alani, A. M., Benedetto, A., and Tosti, F.: Monitoring of Bridges by Satellite Remote Sensing Using Multi-Source and Multi-Resolution Data Integration Techniques: a Case Study of the Rochester Bridge, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2341, https://doi.org/10.5194/egusphere-egu22-2341, 2022.

EGU22-2533 | Presentations | GI2.2

Monitoring of Airport Runways by Satellite-based Remote Sensing Techniques: a Geostatistical Analysis on Sentinel 1 SAR Data 

Valerio Gagliardi, Sebastiano Trevisani, Luca Bianchini Ciampoli, Fabrizio D’Amico, Amir M. Alani, Andrea Benedetto, and Fabio Tosti

Maintenance of airport runways is crucial to comply with strict safety requirements for airport operations and air traffic management [1]. Therefore, monitoring pavement surface defects and irregularities with high temporal frequency, accuracy and spatial density of information becomes strategic in airport asset management [2-3]. In this context, Multi-Temporal Interferometric Synthetic Aperture Radar (MT-InSAR) techniques are gaining momentum in the assessment and health monitoring of infrastructure assets, proving their viability for the long-term evaluation of ground scatterers. However, the implementation of C-band SAR data as a routine tool in Airport Pavement Management Systems (APMSs) for the accurate measurement of differential displacements on runways is still an open challenge [4]. This research aims to demonstrate the viability of using medium-resolution (C-band) SAR products and their contribution to improving current maintenance strategies in case of localised foundation settlements in airport runways. To this purpose, Sentinel-1A SAR products, available through the European Space Agency (ESA) Copernicus Program, were acquired and processed to monitor displacements on “Runway n.3” of the “L. Da Vinci International Airport” in Fiumicino, Rome, Italy. A geostatistical study is performed to explore the spatial data structure and to interpolate the Sentinel-1A SAR data at ground control points. The analysis provided ample information on the spatial continuity of the Sentinel-1 data, also in comparison with the high-resolution COSMO-SkyMed data and the ground-based topographic levelling data, taken as the benchmark. Furthermore, a comparison between the MT-InSAR outcomes from the Sentinel-1A SAR data, interpolated by means of Ordinary Kriging, and the ground-truth topographic levelling data demonstrated the accuracy of the Sentinel-1 data. 
Results support the effectiveness of using medium-resolution InSAR data as a continuous and long-term routine monitoring tool for millimetre-scale displacements in airport runways. Outcomes of this study can pave the way for the development of more efficient and sustainable maintenance strategies for inclusion in next-generation APMSs.  
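The Ordinary Kriging interpolation step can be sketched as follows. This is a generic, self-contained sketch (not the study's implementation), assuming an exponential variogram with illustrative parameters; in practice the variogram model and its nugget, sill and range are fitted to the PS data.

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, rng=300.0):
    """Exponential variogram model (assumed; fit to the data in practice)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(xy, z, xy0, **vp):
    """Ordinary-kriging estimate at point xy0 from samples (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, **vp)
    A[:n, n] = A[n, :n] = 1.0          # unbiasedness constraint row/column
    A[n, n] = 0.0
    b = np.append(exp_variogram(np.linalg.norm(xy - xy0, axis=1), **vp), 1.0)
    w = np.linalg.solve(A, b)          # kriging weights plus Lagrange mult.
    return float(w[:n] @ z)

# Tiny synthetic example: displacement samples (mm) at PS locations (m)
xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
z = np.array([-2.0, -3.0, -1.0, -4.0])
print(ordinary_kriging(xy, z, np.array([50.0, 50.0])))  # mean by symmetry
```

With a zero nugget the estimator is an exact interpolator, i.e. it reproduces the sample value at a sample location, which is the behaviour wanted when honouring PS measurements.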

Acknowledgments and funding: The authors acknowledge the European Space Agency (ESA) for providing the Sentinel-1 SAR products for the development of this research. The COSMO-SkyMed products - ©ASI (Italian Space Agency) - are delivered by ASI under a license to use. This research falls within the National Project “EXTRA TN”, PRIN 2017, supported by MIUR. The authors acknowledge funding from the MIUR, in the frame of the “Departments of Excellence Initiative 2018–2022”, attributed to the Department of Engineering of Roma Tre University.

References

[1] Gagliardi V., Bianchini Ciampoli L., D'Amico F., Tosti F., Alani A. and Benedetto A. “A Novel Geo-Statistical Approach for Transport Infrastructure Network Monitoring by Persistent Scatterer Interferometry (PSI)”. In: 2020 IEEE Radar Conference, Florence, Italy, 2020, pp. 1-6

[2] Gagliardi V., Bianchini Ciampoli L., Trevisani S., D’Amico F., Alani A.M., Benedetto A., Tosti F. "Testing Sentinel-1 SAR Interferometry Data for Airport Runway Monitoring: A Geostatistical Analysis". Sensors 2021; 21(17):5769. https://doi.org/10.3390/s21175769

[3] Gao, M.; Gong, H.; Chen, B.; Zhou, C.; Chen, W.; Liang, Y.; Shi, M.; Si, Y. "InSAR time-series investigation of long-term ground displacement at Beijing Capital International Airport, China". Tectonophysics 2016, 691, 271–281.

[4] Department of Transportation Federal Aviation Administration (FAA), Advisory Circular 150/5320-6F, Airport Pavement Design and Evaluation, 2016

How to cite: Gagliardi, V., Trevisani, S., Bianchini Ciampoli, L., D’Amico, F., Alani, A. M., Benedetto, A., and Tosti, F.: Monitoring of Airport Runways by Satellite-based Remote Sensing Techniques: a Geostatistical Analysis on Sentinel 1 SAR Data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2533, https://doi.org/10.5194/egusphere-egu22-2533, 2022.

EGU22-2712 | Presentations | GI2.2

Quality assessment in railway ballast by integration of NDT methods and remote sensing techniques: a study case in Salerno, Southern Italy 

Luca Bianchini Ciampoli, Valerio Gagliardi, Fabrizio D'Amico, Chiara Clementini, Daniele Latini, and Andrea Benedetto

Maintenance and rehabilitation policies represent a task of paramount importance for managers and administrators of railway networks to maintain the highest standards of transport safety while limiting as much as possible the costs of maintenance operations.

To this effect, high-productivity survey methods become crucial as they allow for timely recognition of the quality of the asset elements, among which the ballast layers are the most likely to undergo rapid deterioration processes. In particular, Ground Penetrating Radar (GPR) has received positive feedback from researchers and professionals due to its capability of detecting, through high-productivity surveys, signs of deterioration within ballasted trackbeds that are not recognizable by a visual inspection at the surface. On the other hand, satellite-based surveys are nowadays increasingly applied to the monitoring of transport assets. Techniques such as Multi-temporal Interferometric Synthetic Aperture Radar (MT-InSAR) allow the evaluation of potential deformations suffered by railway sections and their surroundings by analyzing phase changes between multiple images of the same area acquired at progressive times. 

Despite the wide recognition of both techniques in the field-related scientific literature, survey protocols and data-processing standards for the detection and classification of the quality of ballast layers are still missing. In addition, procedures for the integration and fusion of GPR and InSAR datasets are still very rare.

The present study aims at demonstrating the viability of integrating these two survey methodologies for a more comprehensive assessment of the condition of ballasted track-beds over a railway stretch. In particular, a traditional railway section running from Cava de’ Tirreni to Salerno, Campania (Italy), was subjected to both GPR and MT-InSAR inspections. An ad hoc experimental setup was realized to fix horn antennas with different central frequencies to an actual inspection convoy that surveyed the railway stretch in both travel directions. Time-frequency methods were applied to the data to detect subsections of the railway affected by poor ballast quality (i.e. a high rate of fouling). In parallel, a two-year MT-InSAR analysis was conducted to evaluate possible deformations that occurred along the railway line in the period before the GPR test. In addition, results from both analyses were compared to the reports from visual inspections provided by the railway manager.
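The time-frequency screening step can be illustrated with a minimal sketch. It is not the authors' processing chain: purely for illustration it assumes that ballast fouling manifests as stronger high-frequency attenuation with depth, and it uses the STFT spectral centroid of a synthetic trace as a simple proxy indicator; all parameter values are made up.

```python
import numpy as np
from scipy.signal import stft

def spectral_centroid_profile(trace, fs, nperseg=64):
    """Time-varying spectral centroid of a GPR trace via the STFT.
    A drop in centroid with time (depth) serves here as an assumed
    proxy for fouling-related high-frequency attenuation."""
    f, t, Z = stft(trace, fs=fs, nperseg=nperseg)
    P = np.abs(Z) ** 2
    P /= P.sum(axis=0, keepdims=True) + 1e-30   # normalise each time slice
    return t, (f[:, None] * P).sum(axis=0)      # centroid per time slice

# Synthetic trace: early part dominated by 400 MHz, late part by 150 MHz,
# mimicking loss of high frequencies with increasing depth
fs = 2e9                                        # 2 GS/s sampling (assumed)
n = 2048
time = np.arange(n) / fs
trace = np.sin(2 * np.pi * 400e6 * time) * np.exp(-3.0 * time * fs / n)
trace[n // 2:] = np.sin(2 * np.pi * 150e6 * time[n // 2:])
t, fc = spectral_centroid_profile(trace, fs)
print(fc[2] > fc[-3])    # centroid drops in the later (deeper) windows
```

A real indicator would be calibrated against ground-truth fouling samples; the sketch only shows the mechanics of extracting a depth-dependent spectral attribute.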

The results of the surveys confirm the high potential of GPR in detecting the fouling condition of the ballast layers at various stages of severity. The integration of this information to the outcomes of InSAR analysis allows for identifying whether the deterioration of the track-beds is related to poorly bearing subgrades or rather to excessive stresses between the aggregates resulting in their fragmentation.

Acknowledgments

This research is supported by the Italian Ministry of Education, University, and Research under the National Project “EXTRA TN”, PRIN2017, Prot. 20179BP4SM. Funding from MIUR, in the frame of the “Departments of Excellence Initiative 2018–2022”, attributed to the Department of Engineering of Roma Tre University, is acknowledged. The authors would also like to express their gratitude to RFI S.p.a., in the person of Eng. Pasquale Ferraro, for the valuable support to the tests.

How to cite: Bianchini Ciampoli, L., Gagliardi, V., D'Amico, F., Clementini, C., Latini, D., and Benedetto, A.: Quality assessment in railway ballast by integration of NDT methods and remote sensing techniques: a study case in Salerno, Southern Italy, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2712, https://doi.org/10.5194/egusphere-egu22-2712, 2022.

EGU22-3247 | Presentations | GI2.2

Low-cost scanning of tree trunks for analysis and visualization in augmented reality using smartphone LiDAR and digital twins 

S. Uzor, F. Tosti, and A. M. Alani

Detecting decay in tree trunks is essential in considering tree health and safety. Continual monitoring of tree trunks is possible using a digital model, which can contain incremental assessment data on tree health. Researchers have previously employed non-destructive techniques, for instance, laser scanning, acoustics, and Ground Penetrating Radar (GPR) to study both the external and internal physical dimensions of objects and structures [1], including tree trunks [2]. Light Detection and Ranging (LiDAR) technology is also continually employed in infrastructure and asset management to generate models and to detect surface displacements with millimeter accuracy [3]. Nevertheless, the scanning of structures using these existing state-of-the-art technologies can be time consuming, technical, and expensive.

This work investigates the design and implementation of a smartphone app for scanning tree trunks to generate a 3D digital model for later visualization and assessment. The app uses LiDAR technology, which has recently become available in smart devices, for instance, the Apple iPhone 12+ and the iPad Pro. With the prevalence of internet-of-things (IoT) sensors, digital twins are being increasingly used in a variety of industries, for example, architecture and manufacturing. A digital twin is a digital representation of an existing physical object or structure. With our app, a digital twin of a tree can be developed and maintained by continually updating data on its dimensions and internal state of decay. Further, we can situate and visualize tree trunks as digital objects in the real world using augmented reality, which is also possible in modern smart devices. We previously investigated tree trunks using GPR [2] to generate tomographic maps that denote the level of decay. We aim to adopt a data integration and fusion approach, using such existing (and incremental) GPR data and an external LiDAR scan to gain a full 3D ‘picture’ of tree trunks.

We intend to validate our app against state-of-the-art techniques, i.e., laser scanning and photogrammetry. With the ability to scan tree trunks within reasonable parameters of accuracy, the app can provide a relatively low-cost environmental modelling and assessment solution for researchers and experts.

 

Acknowledgments: Sincere thanks to the following for their support: Lord Faringdon Charitable Trust, The Schroder Foundation, Cazenove Charitable Trust, Ernest Cook Trust, Sir Henry Keswick, Ian Bond, P. F. Charitable Trust, Prospect Investment Management Limited, The Adrian Swire Charitable Trust, The John Swire 1989 Charitable Trust, The Sackler Trust, The Tanlaw Foundation, and The Wyfold Charitable Trust.

 

References

[1] Alani A. et al., Non-destructive assessment of a historic masonry arch bridge using ground penetrating radar and 3D laser scanner. IMEKO International Conference on Metrology for Archaeology and Cultural Heritage Lecce, Italy, October 23-25, 2017.

[2] Tosti et al., "The Use of GPR and Microwave Tomography for the Assessment of the Internal Structure of Hollow Trees," in IEEE Transactions on Geoscience and Remote Sensing, Doi: 10.1109/TGRS.2021.3115408.

[3] Lee, J. et al., Long-term displacement measurement of bridges using a LiDAR system. Struct Control Health Monit. 2019; 26:e2428.

How to cite: Uzor, S., Tosti, F., and Alani, A. M.: Low-cost scanning of tree trunks for analysis and visualization in augmented reality using smartphone LiDAR and digital twins, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3247, https://doi.org/10.5194/egusphere-egu22-3247, 2022.

EGU22-3799 | Presentations | GI2.2

A strategy of territorial control: from the standard comparison techniques to the Advanced Unsupervised Deep Learning Change Detection in high resolution SAR images 

I. Pennino

The need to monitor and evaluate the impact of natural phenomena on structures and infrastructures, as well as on the natural environment, has taken on considerable importance for society in recent years, also due to the continuous occurrence of "catastrophic events" that are changing our Planet ever faster.

Innovation and research have profoundly changed data acquisition and processing methodologies, leading to the development of increasingly complex and innovative technologies. From an application point of view, remote sensing makes it possible to easily manage the layered information that is indispensable for the best characterization of the environment from a numerical and a chemical-physical point of view.

NeMeA Sistemi srl, attentive for years to the environment and its protection, began to study it using RADAR/SAR (Synthetic Aperture Radar) data thanks to the opportunity to make the best use of COSMO-SkyMed data through the 2015 Open Call for SMEs (Small and Medium Enterprises) tender of the Italian Space Agency.

Since then, NeMeA Sistemi srl has undertaken a highly focused and innovative training path that has led it to observe the Earth in a new way. This constantly growing path has allowed the company to master RADAR/SAR data and their enormous potential.

The COSMO-SkyMed data are treated, processed and transformed to provide various kinds of information, making it possible to identify changes and to classify and measure objects and artifacts.

In this context, in 2016 NeMeA Sistemi srl proposed a first project for the monitoring of illegal buildings in the Municipality of Ventimiglia (Liguria), with positive results. The final product was obtained with classic standard classification techniques applied to the SAR data.

Following this positive experience, NeMeA Sistemi srl also applied to the regional call issued by Sardegna Ricerche for the Sardinia Region, funded by the European Regional Development Fund (ERDF) 2014-2020.

The SardOS project (Sardinia Observed from Space), proposed by NeMeA Sistemi srl, aims to monitor and safeguard environmental and anthropogenic health in the territory of 4 Sardinian municipalities (Alghero, Capoterra, Quartu and Arzachena), also identifying the coast profiles, the evolutionary trend of sediments in riverbeds, and buildings not present in the land registry. For environmental monitoring purposes, COSMO-SkyMed data are exploited and combined with bathymetric measurements acquired using the Hydra aquatic drone owned by NeMeA Sistemi srl. SAR data were processed using innovative territorial-analysis algorithms specific to the urban environment.

After these successful case studies, which allowed the development of new services for territorial monitoring and control, NeMeA Sistemi srl is working on a new project, 3xA (creation of Machine Learning and Deep Learning algorithms dedicated to pattern recognition in SAR data). Exploiting Artificial Intelligence, the implemented algorithms use innovative unsupervised techniques to identify any changes.

The objective of this contribution is to provide an overview of the experience gained at NeMeA Sistemi srl and of the value-added products and innovative services developed in the company for environmental monitoring and the prevention of natural hazards and risks.

How to cite: Pennino, I.: A strategy of territorial control: from the standard comparison techniques to the Advanced Unsupervised Deep Learning Change Detection in high resolution SAR images, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3799, https://doi.org/10.5194/egusphere-egu22-3799, 2022.

EGU22-4437 | Presentations | GI2.2

Rebar corrosion monitoring with a multisensor non-destructive geophysical techniques. 

Enzo Rizzo, Giacomo Fornasari, Luigi Capozzoli, Gregory De Martino, and Valeria Giampaolo

Rebar corrosion is one of the main causes of deterioration of reinforced engineering structures. This degradation reduces the service life and durability of the structures and can ultimately result in their collapse. When the first cracks are noticed on the concrete surface, corrosion has generally reached an advanced stage and maintenance action is required. Early detection of rebar corrosion in bridges, tunnels, buildings and other civil engineering structures is important to reduce the expensive cost of repairing the deteriorated structure. Several techniques have been developed for understanding the mechanism and kinetics of rebar corrosion, but this paper highlights the interest of combining several NDT methods for field inspection, to overcome the limitation of measuring instantaneous corrosion rates and to improve the estimation of the service life of reinforced concrete (RC) structures. Non-destructive testing and evaluation of rebar corrosion is a major issue for predicting the service life of reinforced concrete structures.

This paper introduces a laboratory test performed at the Geophysical Laboratory of Ferrara University. The test consisted of a multisensor application for rebar corrosion monitoring using different geophysical methods on a concrete sample of about 50 x 30 cm with one steel rebar of 10 mm diameter. Rebar corrosion was induced in accelerated form using a direct current (DC) power supply and a 5% sodium chloride (NaCl) solution. A 2 GHz GPR antenna by IDS, ERT with an ABEM Terrameter, and self-potential measurements with a high-impedance Keithley multivoltmeter were used for corrosion monitoring. A multisensor approach should reduce measurement errors and synergistically improve the estimation of the service life of the RC.
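For the impressed-current stage, the theoretical steel mass loss follows Faraday's law, which is commonly used to size such accelerated corrosion tests. The sketch below uses illustrative values, not the actual current or duration of this experiment.

```python
# Theoretical iron mass loss from a constant impressed direct current,
# via Faraday's law: m = M * I * t / (z * F)
M_FE = 55.845      # molar mass of iron, g/mol
Z = 2              # electrons exchanged per atom (Fe -> Fe2+)
F = 96485.0        # Faraday constant, C/mol

def mass_loss_g(current_a, duration_s):
    """Mass of iron dissolved (g) by a constant impressed current."""
    return M_FE * current_a * duration_s / (Z * F)

# e.g. a hypothetical 100 mA applied for 7 days
print(round(mass_loss_g(0.1, 7 * 24 * 3600), 2))  # ~ 17.5 g
```

In practice the actual mass loss deviates from this theoretical value (current efficiency below 100%), which is one reason gravimetric checks accompany such tests.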

Each technique provided specific information, but a data integration method will further improve the overall quality of diagnosis. The collected data were integrated to track the evolution of the corrosion of the reinforcement bar. All three methods were able to detect variations of the physical parameters during the corrosion process, but more attention is needed for natural corrosion, which is a slow process: the properties of the experimental steel-concrete interface may not be representative of natural corrosion. Moreover, each of these geophysical methods has certain advantages and limitations; therefore, a combination of these techniques in a multisensor approach is recommended to assess the corrosion condition of the steel and the condition of the concrete cover. Extrapolating laboratory results obtained with a single rebar to a large structure with interconnected rebars also remains challenging; therefore, in future experiments, special care must be taken in the design and preparation of the samples to obtain meaningful information for field application.

How to cite: Rizzo, E., Fornasari, G., Capozzoli, L., De Martino, G., and Giampaolo, V.: Rebar corrosion monitoring with a multisensor non-destructive geophysical techniques., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4437, https://doi.org/10.5194/egusphere-egu22-4437, 2022.

EGU22-4826 | Presentations | GI2.2

A 24 GHz MIMO radar for the autonomous navigation of unmanned surface vehicles 

Giovanni Ludeno, Gianluca Gennarelli, Carlo Noviello, Giuseppe Esposito, Ilaria Catapano, and Francesco Soldovieri

In recent years, unmanned surface vehicles (USVs) in the marine environment have attracted considerable interest, since they are flexible observation platforms suitable for operating in remote areas on demand. Accordingly, their usage has been proposed in several contexts, such as research activities, military operations, environmental monitoring and oil exploration [1]. However, most current USV remote control techniques are based on human-assisted technology, so a fully autonomous USV system is still an open issue [2].

The safety of the vehicle and the ability to complete the mission depend crucially on the capability of detecting objects on the sea surface, which is necessary for collision avoidance. Anti-collision systems for USVs typically require measurements collected from multiple sensors (e.g. Lidar, cameras, etc.), where each sensor has its own advantages and disadvantages in terms of resolution, field of view (FoV), operative range and so on [3].

Among the available sensing technologies, radar is capable of operating regardless of weather and visibility conditions, has moderate costs and can be easily adapted to operate within the marine environment. Furthermore, radar is characterized by an excellent coverage and high resolution along the range coordinate and it is also able to guarantee a 360° FoV in the horizontal plane.

Nautical radars are the most popular solutions to detect floating targets on the sea surface; however, they are bulky and not always effective in detecting small objects located very close to the radar.

This contribution investigates the applicability of a compact and lightweight 24 GHz multiple-input multiple-output (MIMO) radar, originally developed for automotive applications, to localize floating targets at short ranges (from tens to a few hundreds of meters). In this frame, we propose an ad-hoc signal processing strategy combining MIMO technology, detection, and tracking algorithms to achieve target localization and tracking in real time. The proposed signal processing chain is first validated through numerical simulations. Afterwards, preliminary field tests carried out in the marine environment are presented to assess the performance of the radar prototype and of the related signal processing.
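A typical building block of the detection stage in such a processing chain is a constant-false-alarm-rate test. The sketch below shows a generic cell-averaging CFAR on a 1-D range power profile; it is an illustration under assumed parameters, not the prototype's actual detector.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-4):
    """Cell-averaging CFAR detector on a 1-D range power profile.
    Returns a boolean detection mask (edge cells left undetected)."""
    n = len(power)
    n_train = 2 * train
    # Scaling factor giving the desired false-alarm probability for
    # exponentially distributed noise power
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    det = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        lead = power[i - train - guard : i - guard]     # cells before CUT
        lag = power[i + guard + 1 : i + guard + train + 1]  # cells after
        noise = (lead.sum() + lag.sum()) / n_train
        det[i] = power[i] > alpha * noise
    return det

# Synthetic range profile: exponential clutter plus two point targets
rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 400)
profile[[120, 300]] += 60.0
mask = ca_cfar(profile)
print(np.flatnonzero(mask))   # should include indices 120 and 300
```

The guard cells keep target energy out of the local noise estimate, so a strong return does not mask itself; tracking would then associate detections across successive scans.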

 

References

  • [1] Zhixiang et al. "Unmanned surface vehicles: An overview of developments and challenges", Annual Reviews in Control, vol. 41, pp. 71-93, 2016
  • [2] Caccia, M. Bibuli, R. Bono, G. Bruzzone, “Basic navigation, guidance and control of an unmanned surface vehicle”, Autonomous Robots, vol. 25, no. 4, pp. 349-365, 2008
  • [3] Robinette, M. Sacarny, M. DeFilippo, M. Novitzky, M. R. Benjamin, “Sensor evaluation for autonomous surface vehicles in inland waterways”, Proc. IEEE OCEANS 2019, pp. 1-8, 2019.

How to cite: Ludeno, G., Gennarelli, G., Noviello, C., Esposito, G., Catapano, I., and Soldovieri, F.: A 24 GHz MIMO radar for the autonomous navigation of unmanned surface vehicles, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4826, https://doi.org/10.5194/egusphere-egu22-4826, 2022.

EGU22-4912 | Presentations | GI2.2

Multiples suppression scheme of waterborne GPR data 

Yonghui Zhao, Ruiqing Shen, and Hui Cheng

Ground penetrating radar (GPR) is a geophysical method that uses high-frequency electromagnetic waves to detect underground structures or the interior of objects. It has been widely used in geo-engineering and environmental detection. In recent years, GPR has played an increasingly important role in shallow underwater structure surveys due to its advantages of economy, high efficiency and high accuracy. However, because the water surface and bottom have strong reflection coefficients for electromagnetic waves, multiples appear in GPR profiles acquired over water; these reduce the signal-to-noise ratio of the data, can even lead to false imaging, and ultimately affect the reliability of the interpretation. With the increasing requirement for high-precision GPR detection in waters, multiple suppression has become an essential issue in expanding the application fields of GPR. In order to suppress multiples in waterborne GPR profiles, a novel multiple-suppression method is proposed, based on the combination of predictive deconvolution and surface-related multiple elimination (SRME). After validating the scheme on one-dimensional data, the two methods are adaptively optimized according to the characteristics of waterborne GPR data. First, the prediction step of the predictive deconvolution can be determined by picking the bottom-reflection signal. Second, the water-layer information provided by the bottom reflection is used in a continuation from the surface to the bottom to suppress the internal multiples. Numerical-model and real-data tests show that each single method can suppress most of the multiples of the bottom interface, and that the combination strategy further removes the residues. The research provides a basis for the precise interpretation of GPR data in hydro-detection.
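The predictive-deconvolution component can be sketched as follows. This is a generic single-trace illustration, not the authors' code: the prediction distance is assumed to equal the picked two-way time of the water layer (the multiple period), and a spiky synthetic trace stands in for a real waterborne GPR trace.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, pred_lag, flen=20, eps=1e-3):
    """Predictive deconvolution of a single trace.
    pred_lag : prediction distance in samples (here: the picked two-way
               time of the water layer, i.e. the multiple period)
    flen     : prediction-filter length in samples
    Returns the prediction-error trace (primaries kept, periodic
    multiples attenuated)."""
    n = len(trace)
    r = np.correlate(trace, trace, mode="full")[n - 1 :]  # autocorrelation
    col = r[:flen].copy()
    col[0] *= 1.0 + eps                   # prewhitening for stability
    g = solve_toeplitz(col, r[pred_lag : pred_lag + flen])
    pred = np.zeros_like(trace)
    for k, gk in enumerate(g):            # build the predicted (multiple) part
        pred[pred_lag + k :] += gk * trace[: n - pred_lag - k]
    return trace - pred

# Synthetic trace: primary at sample 50 plus water-bottom multiples every
# 40 samples with alternating polarity and decaying amplitude
n, period = 300, 40
trace = np.zeros(n)
trace[50] = 1.0
for m in range(1, 5):
    trace[50 + m * period] = (-0.6) ** m
out = predictive_decon(trace, pred_lag=period)
print(abs(out[50]) > 5 * abs(out[50 + period]))  # first multiple suppressed
```

Only the deconvolution half of the combined scheme is sketched here; the SRME-style water-layer continuation would then target the internal multiples that periodicity-based prediction misses.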

How to cite: Zhao, Y., Shen, R., and Cheng, H.: Multiples suppression scheme of waterborne GPR data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4912, https://doi.org/10.5194/egusphere-egu22-4912, 2022.

EGU22-4914 | Presentations | GI2.2

Sensing roadway surfaces for a non-destructive assessment of pavement damage potential 

Konstantinos Gkyrtis, Andreas Loizos, and Christina Plati

Modern roadways provide road users with a comfortable and safe ride to their destinations. Increases in traffic demand and maximum allowable loads imply that roadway authorities should also care for the structural soundness of pavements. In parallel, budgetary limitations and frequent road closures for rehabilitation activities, especially on heavy-duty motorways, might lead the related authorities to focus their strategies on preserving the pavements' functional performance. However, structural issues concerning pavement damage remain at the forefront as a pavement's service life extends beyond its design life; thus, structural condition assessment is required to ensure pavement sustainability in the long term.

 

Non-Destructive Testing (NDT) has played a major role in condition monitoring and in the evaluation of rehabilitation needs. Together with input from visual inspections and/or sample destructive testing (e.g. coring), NDT data help define indicators and threshold values that support decision-making for pavement condition assessment. The most indicative tool for structural evaluation is the Falling Weight Deflectometer (FWD), which senses roadway surfaces through geophones recording load-induced deflections at various locations. Additional geophysical inspection data from the Ground Penetrating Radar (GPR) are used to estimate the pavement's stratigraphy. Integrating the above sensing data enables the estimation of the pavement's performance and its damage potential.

 

To this end, a major challenge that pavement engineers face concerns the assumptions made about the mechanical characterization of pavement materials. Asphalt mixtures, located in the upper pavement layers, behave viscoelastically because of their temperature and loading-frequency dependency, whereas, on the contrary, simplified assumptions of linear elastic materials are most commonly made in conventional NDT analysis. In this research, NDT data are integrated with sample data from cores extracted in situ to comparatively estimate long-term pavement performance through internationally calibrated damage models that consider different assumptions for the asphalt materials. Two damage modes are considered, bottom-up and top-down fatigue cracking, conceptually perceived as alligator cracks and longitudinal cracks, respectively, along a roadway's surface. As part of ongoing research on long-term pavement condition monitoring, data from a new pavement were considered at this stage, indicating a promising capability of NDT data towards damage assessment.

 

Overall, this study aims to demonstrate the power of pavement sensing data for the structural health monitoring of roadways, pinpointing the significance of database development for rational management throughout a roadway's service life. Furthermore, data from limited destructive testing enrich the pavement evaluation process with purely mechanistic perspectives, thereby paving the way for integrated protocols with improved accuracy for site investigations, especially in project-level analysis, where rehabilitation design becomes critical.

How to cite: Gkyrtis, K., Loizos, A., and Plati, C.: Sensing roadway surfaces for a non-destructive assessment of pavement damage potential, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4914, https://doi.org/10.5194/egusphere-egu22-4914, 2022.

EGU22-5731 | Presentations | GI2.2

Ultrasonic Scattering and Absorption Imaging for the Reinforced Concrete using Adjoint Envelope Tomography 

Tuo Zhang, Christoph Sens-Schönfelder, Niklas Epple, and Ernst Niederleithinger

Seismic and ultrasound tomography can provide rich information about spatial variations of elastic properties inside a material, rendering these methods ideal for non-destructive testing. Tomographic methods primarily use direct and reflected waves, but they are also strongly affected by waves scattered at small-scale structures below the resolution limit. As a consequence, conventional tomography can unveil only the deterministic large-scale structure, treating scattered waves as imaging noise. To image scattering and absorption properties, we present the adjoint envelope tomography (AET) method, which is based on a forward simulation of wave envelopes using radiative transfer theory and an adjoint (backward) simulation of the envelope misfit, in full analogy to full-waveform inversion (FWI). The forward problem is solved by modelling 2-D multiple non-isotropic scattering in an acoustic medium with spatially variable heterogeneity and attenuation using the Monte Carlo method. The fluctuation strength ε and the intrinsic quality factor Q-1 of the random medium describe the spatial variability of scattering and absorption, respectively. The misfit function is defined as the difference between the full squared observed and modelled envelopes. We derive the sensitivity kernels corresponding to this misfit function, which is minimized during the iterative adjoint inversion with the L-BFGS method. The algorithm has been validated in numerical tests (Zhang et al., 2021). In the present work, we show real-data results from an ultrasonic experiment conducted on a reinforced concrete specimen. The late coda waves of the envelopes computed from the 60 kHz ultrasonic signals are first used to invert for intrinsic attenuation, whose distribution resembles the temperature distribution of the concrete block. Based on this result, the scattering strength is then inverted separately from the early coda waves, which successfully reveals the structure of the small-scale heterogeneity in the material. A resolution test shows that we recover the distribution of heterogeneity reasonably well.
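The misfit definition used above (the difference between full squared observed and modelled envelopes) can be sketched compactly; the envelope here is computed via the Hilbert transform, and the variable names are ours, not the authors'.

```python
import numpy as np
from scipy.signal import hilbert

def squared_envelope(sig):
    """Squared envelope of a signal via the analytic signal (Hilbert transform)."""
    return np.abs(hilbert(sig)) ** 2

def envelope_misfit(obs, syn, dt):
    """L2 misfit between the full squared observed and modelled envelopes,
    i.e. 0.5 * integral of (E_obs - E_syn)^2 dt approximated by a sum."""
    return 0.5 * dt * np.sum((squared_envelope(obs) - squared_envelope(syn)) ** 2)
```

The misfit vanishes when observed and modelled envelopes coincide and grows with their squared difference, which is the quantity driven to a minimum by the L-BFGS iterations.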

How to cite: Zhang, T., Sens-Schönfelder, C., Epple, N., and Niederleithinger, E.: Ultrasonic Scattering and Absorption Imaging for the Reinforced Concrete using Adjoint Envelope Tomography, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5731, https://doi.org/10.5194/egusphere-egu22-5731, 2022.

EGU22-6168 | Presentations | GI2.2

An investigation into road trees’ root systems through geostatistical analysis of GPR data 

Livia Lantini, Sebastiano Trevisani, Valerio Gagliardi, Fabio Tosti, and Amir M. Alani

Street trees are a critical asset for the urban environment due to the variety of environmental and social benefits provided [1]. However, the conflicting coexistence of tree root systems with the built environment, especially with road infrastructure, frequently results in extensive damage, such as the uplifting and cracking of sidewalks and curbs, endangering pedestrians, cyclists, and road drivers’ safety.

Within this context, ground penetrating radar (GPR) is gaining recognition as an accurate non-destructive testing (NDT) method for tree roots’ assessment and mapping [2]. Nevertheless, the investigation methods developed so far are often inadequate for application on street trees, as these are often difficult to access. Recent studies have focused on implementing new survey and processing techniques for rapid tree root assessment based on combined time-frequency analyses of GPR data [3].  

This research also explores the adoption of a geostatistical approach to the spatial analysis and interpolation of GPR data. The radial development of roots and the complexity of the root network constitute a challenging setting for spatial data analysis and for the recognition of specific spatial features.

Preliminary results are therefore presented based on a geostatistical analysis of GPR data. To this end, 2-D GPR outputs (i.e., B-scans and C-scans) were analysed to quantify the spatial correlation amongst radar amplitude reflection features and their anisotropy, leading to a more reliable detection and mapping of tree roots. The proposed processing system could be employed to investigate trees that are difficult to access, such as road trees, where more comprehensive analyses are hard to implement. Interpretation of the results has shown the viability of the proposed analysis and paves the way for further investigations.
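A statistic of the kind described, spatial correlation of a gridded amplitude map and its anisotropy, can be sketched as a directional empirical semivariogram; this is an illustration of the general geostatistical tool, not the authors' code.

```python
import numpy as np

def directional_semivariogram(z, lags, axis=0):
    """Empirical semivariogram gamma(h) of a gridded field z (e.g. a C-scan
    amplitude map) along one grid axis. Comparing axis=0 with axis=1
    exposes geometric anisotropy such as elongated root reflections."""
    z = np.asarray(z, dtype=float)
    gamma = []
    for h in lags:
        # paired differences at separation h along the chosen axis
        d = np.take(z, range(h, z.shape[axis]), axis=axis) - \
            np.take(z, range(0, z.shape[axis] - h), axis=axis)
        gamma.append(0.5 * np.mean(d ** 2))
    return np.array(gamma)
```

For an isotropic field the two directional semivariograms coincide; a strong mismatch between axes indicates anisotropic reflection features.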

 

Acknowledgements

The authors would like to express their sincere thanks and gratitude to the following trusts, charities, organisations and individuals for their generosity in supporting this project: Lord Faringdon Charitable Trust, The Schroder Foundation, Cazenove Charitable Trust, Ernest Cook Trust, Sir Henry Keswick, Ian Bond, P. F. Charitable Trust, Prospect Investment Management Limited, The Adrian Swire Charitable Trust, The John Swire 1989 Charitable Trust, The Sackler Trust, The Tanlaw Foundation, and The Wyfold Charitable Trust.

 

References

[1]         Tyrväinen, L., Pauleit, S., Seeland, K., & de Vries, S., 2005. "Benefits and uses of urban forests and trees". In: Urban Forests and Trees. Springer, Berlin, Heidelberg.

[2]         Lantini, L., Tosti, F., Giannakis, I., Zou, L., Benedetto, A. and Alani, A. M., 2020. "An Enhanced Data Processing Framework for Mapping Tree Root Systems Using Ground Penetrating Radar," Remote Sensing 12(20), 3417.

[3]         Lantini, L., Tosti, F., Zou, L., Ciampoli, L. B., & Alani, A. M., 2021. "Advances in the use of the Short-Time Fourier Transform for assessing urban trees’ root systems." Earth Resources and Environmental Remote Sensing/GIS Applications XII. Vol. 11863. SPIE, 2021.

How to cite: Lantini, L., Trevisani, S., Gagliardi, V., Tosti, F., and Alani, A. M.: An investigation into road trees’ root systems through geostatistical analysis of GPR data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6168, https://doi.org/10.5194/egusphere-egu22-6168, 2022.

EGU22-6251 | Presentations | GI2.2

Algorithms fusion for near-surface geophysical survey 

Yih Jeng, Chih-Sung Chen, and Hung-Ming Yu

Near-surface geophysical methods have been widely applied to the investigation of shallow targets in scientific and engineering research. Various data processing algorithms are available to help visualize targets, interpret data and, finally, achieve research goals.

Most of the available algorithms are Fourier-based and assume linear, stationary signals. Real data, however, are rarely of this kind and should be treated as nonlinear and non-stationary. In recent decades, several newer algorithms have been proposed for processing non-stationary, or nonlinear and non-stationary, data, for instance the wavelet transform, the curvelet transform, full-waveform inversion and the Hilbert-Huang transform. This progress is encouraging, but conventional algorithms still have many advantages, such as strong theoretical foundations, speed and ease of application, which the newer algorithms often lack.

In this study, we fuse conventional and contemporary algorithms for near-surface geophysical methods. A cost-effective ground-penetrating radar (GPR) data processing scheme is introduced for shallow-depth structure mapping as an example. The method integrates a nonlinear filtering technique, natural-logarithm-transformed ensemble empirical mode decomposition (NLT EEMD), with conventional pseudo-3D GPR data processing, including background removal and migration, to map subsurface targets in 2D profiles. The final pseudo-3D data volume is constructed by conventional linear interpolation. This study shows that the proposed technique can successfully locate buried targets with minimal survey effort and affordable computational cost. Furthermore, the application of the proposed method is not limited to GPR data processing; any geophysical or engineering data with a similar structure can be treated.
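The conventional background-removal step mentioned above, subtracting the mean trace to suppress horizontal banding, is simple enough to sketch; this is the standard textbook operation, not the authors' NLT EEMD filter.

```python
import numpy as np

def background_removal(bscan):
    """Suppress horizontal banding (antenna ringing, direct wave) in a GPR
    B-scan by subtracting the mean trace.
    bscan: 2-D array of shape (n_time_samples, n_traces)."""
    return bscan - bscan.mean(axis=1, keepdims=True)
```

Features common to all traces cancel exactly, while laterally localized reflections (the targets) survive with only a small bias from their own contribution to the mean.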

How to cite: Jeng, Y., Chen, C.-S., and Yu, H.-M.: Algorithms fusion for near-surface geophysical survey, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6251, https://doi.org/10.5194/egusphere-egu22-6251, 2022.

EGU22-7009 | Presentations | GI2.2

Geoelectric data modeling using Mimetic Finite Difference Method 

Deepak Suryavanshi and Rahul Dehiya

Nondestructive imaging and monitoring of the earth's subsurface using the geoelectric method require reliable and versatile numerical techniques for solving the differential equations that govern the method's physics. For a robust numerical algorithm, the discrete operator should encompass the fundamental properties of the original continuum model and differential operator. In geoelectric modelling, the critical model properties are anisotropy, irregular geometry and discontinuous physical properties, whereas the vital continuum operator properties are symmetry, positivity of solutions, duality and self-adjointness of the differential operators, and the exact mathematical identities of vector and tensor calculus. In this study, we simulate the response using the Mimetic Finite Difference Method (MFDM), in which the discrete operator is constructed based on the support operator [1]. The MFDM operator mimics the above properties on structured and unstructured grids [2]. This is achieved by requiring the discrete analogs of the divergence and gradient operators to satisfy the integral identities of their continuum counterparts.

The developed algorithm's accuracy is benchmarked against the analytical responses of dyke models of various conductivity contrasts for the pole-pole configuration. After verifying the accuracy of the scheme, further tests are conducted to check the robustness of the algorithm with respect to non-orthogonal grids, which are essential for simulating responses over rugged topography. The surface potential is simulated on structured grids for a three-layer model. Subsequently, the orthogonal grids are distorted using uniformly distributed pseudo-random numbers. To quantify the distortion, we calculate the angles at all grid nodes; the node angles follow an approximately Gaussian distribution. We characterize as highly distorted those grids for which the angle at a grid node falls outside the interval from 20 to 160 degrees. Numerical tests are conducted with varying degrees of grid distortion, such that the highly distorted cells make up 1% to 10% of the total cells. The maximum error in surface potential stays below 1.5% in all cases. Hence, the algorithm is very stable under grid distortion and can consequently model the response of very complex models. The developed algorithm can thus be used to analyse geoelectrical data in complex geological scenarios such as rugged topography and an anisotropic subsurface.
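The distortion test described above can be reproduced in a few lines: perturb the interior nodes of an orthogonal grid with uniform pseudo-random shifts and measure the angle between the cell edges at each node. The grid size and perturbation amplitude below are illustrative, not the authors' values.

```python
import numpy as np

def distorted_grid(n, amp, seed=0):
    """n x n structured grid whose interior nodes receive uniform random shifts."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.arange(n, dtype=float), np.arange(n, dtype=float))
    x[1:-1, 1:-1] += rng.uniform(-amp, amp, (n - 2, n - 2))
    y[1:-1, 1:-1] += rng.uniform(-amp, amp, (n - 2, n - 2))
    return x, y

def node_angles_deg(x, y):
    """Angle (degrees) between the east- and north-going cell edges at each node."""
    e = np.stack([x[:-1, 1:] - x[:-1, :-1], y[:-1, 1:] - y[:-1, :-1]], axis=-1)
    nv = np.stack([x[1:, :-1] - x[:-1, :-1], y[1:, :-1] - y[:-1, :-1]], axis=-1)
    c = (e * nv).sum(-1) / (np.linalg.norm(e, axis=-1) * np.linalg.norm(nv, axis=-1))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

x, y = distorted_grid(21, amp=0.1)
ang = node_angles_deg(x, y)
# fraction of nodes whose angle falls outside the 20-160 degree interval
highly_distorted = np.mean((ang < 20) | (ang > 160))
```

Increasing `amp` raises the fraction of highly distorted nodes, which is the control parameter of the robustness study.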

[1] Winters, Andrew R., and Mikhail J. Shashkov. Support Operators Method for the Diffusion Equation in Multiple Materials. No. LA-UR-12-24117. Los Alamos National Lab.(LANL), Los Alamos, NM (United States), 2012.

[2] Lipnikov, Konstantin, Gianmarco Manzini, and Mikhail Shashkov. "Mimetic finite difference method." Journal of Computational Physics 257 (2014): 1163-1227.

How to cite: Suryavanshi, D. and Dehiya, R.: Geoelectric data modeling using Mimetic Finite Difference Method, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7009, https://doi.org/10.5194/egusphere-egu22-7009, 2022.

EGU22-7547 | Presentations | GI2.2

Assessing Deformation Monitoring Systems For Supporting Structural Rehabilitation under Harsh Conditions 

Hans Neuner, Victoria Kostjak, Finn Linzer, Walter Loderer, Christian Seywald, Alfred Strauss, Matthias Rigler, and Markus Polt

This paper deals with the evaluation of four measuring systems for the detection of potential deformations that can occur during structural rehabilitation measures. For this purpose, a test object resembling the shape of a tunnel structure was constructed. The structural properties of this test object are discussed in the related paper by Strauss et al. submitted for the same session.

In the paper, the installed measuring systems are presented first. These are a lamella system based on fibre optics, an array of accelerometers, a digital image correlation system and a system based on a profile laser scanner. The operating principles of the systems are briefly introduced.

A long-term measurement on the object in an unloaded state, extending over several weeks, enables statements about the capture of temperature-related deformations, the temperature dependence of the measured values and drift effects of the investigated systems. Selective loading of the test object was applied via four screw rods, both in the elastic and in the plastic deformation range. This provided insights into the precision and the sensitivity of the analysed measuring systems.

Environmental conditions may have a strong influence on the measurement values. These conditions may be determined by permanent installations on the structure and its operating conditions, as well as by the rehabilitation measures undertaken. Representative of the first category, we investigated the influence of magnetic fields and light conditions on the measuring systems. For the second category, strong dust formation and increased humidity were generated during a test procedure.

An assessment regarding data handling, including storage, transfer and processing, completes the investigation of the four measuring systems. A summarising evaluation concludes the article.

How to cite: Neuner, H., Kostjak, V., Linzer, F., Loderer, W., Seywald, C., Strauss, A., Rigler, M., and Polt, M.: Assessing Deformation Monitoring Systems For Supporting Structural Rehabilitation under Harsh Conditions, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7547, https://doi.org/10.5194/egusphere-egu22-7547, 2022.

EGU22-8512 | Presentations | GI2.2

Verification of the performance of reinforced concrete profiles of alpine infrastructure systems assisted by innovative monitoring 

Alfred Strauss, Hans Neuner, Matthias Rigler, Markus Polt, Christian Seywald, Victoria Kostjak, Finn Linzer, and Walter Loderer

The verification of the structural behaviour of existing structures and their material characteristics requires tests and monitoring to gather information about the actual response. Comparing the actual performance with the designed performance enables verification of the design assumptions in terms of applied loads and material resistance. If the current performance does not comply with the designed performance, the design assumptions need to be updated. The objective of this contribution is to provide guidance for verifying the performance of reinforced concrete profiles of alpine infrastructure systems, such as tunnels, assisted by monitoring, testing and material testing.

The application of defined loads to a structure to verify its load-carrying capacity is a powerful tool for evaluating existing structures. In this research, different types of load tests are employed on tunnel profiles depending on the limit state being investigated. The system responses used to validate the structural performance are recorded with monitoring systems that are innovative in tunnel settings, such as accelerometer arrays, fibre optic sensors, laser distance sensors and a digital image correlation system; see also the related paper by Neuner et al. In these studies we also pay special attention to the capabilities of Digital Image Correlation and Nonlinear Finite Element Analysis. Digital Image Correlation (often referred to as "DIC") is an easy-to-use optical method for measuring deformations on the surface of an object. The method tracks changes in the grayscale pattern in small areas (called subsets) during deformation.
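The subset-tracking principle behind DIC can be illustrated with a toy integer-shift search that maximizes zero-normalized cross-correlation; production DIC codes refine this to sub-pixel accuracy, and this sketch is not the software used in the study.

```python
import numpy as np

def track_subset(ref, deformed, top, left, size, search):
    """Locate a reference-image subset in the deformed image by maximizing
    zero-normalized cross-correlation over integer shifts (dy, dx)."""
    tpl = ref[top:top + size, left:left + size].astype(float)
    tpl -= tpl.mean()
    best, shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = deformed[top + dy:top + dy + size,
                           left + dx:left + dx + size].astype(float)
            w = win - win.mean()
            denom = np.sqrt((tpl ** 2).sum() * (w ** 2).sum())
            if denom == 0:
                continue
            c = (tpl * w).sum() / denom
            if c > best:
                best, shift = c, (dy, dx)
    return shift, best
```

Repeating this for a grid of subsets yields the displacement field of the surface between two load states.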

Finally, we present the process for implementing and validating proof-loading concepts based on the aforementioned monitoring information, in order to derive the existing safety level using advanced digital twin models.

How to cite: Strauss, A., Neuner, H., Rigler, M., Polt, M., Seywald, C., Kostjak, V., Linzer, F., and Loderer, W.: Verification of the performance of reinforced concrete profiles of alpine infrastructure systems assisted by innovative monitoring, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8512, https://doi.org/10.5194/egusphere-egu22-8512, 2022.

EGU22-8594 | Presentations | GI2.2

Analysis of low-frequency drone-borne GPR for soil surface electrical conductivity mapping 

Kaijun Wu and Sébastien Lambot

At VHF frequencies, the sensitivity of the reflection coefficient at the air-soil interface to the soil electromagnetic properties, i.e., the dielectric permittivity and the electrical conductivity, varies with frequency: the lower the frequency, the lower the sensitivity to permittivity and the larger the sensitivity to conductivity. In this study, we investigated low-frequency drone-borne ground-penetrating radar (GPR) with full-wave inversion for characterizing soil surface electrical conductivity. To achieve a good sensitivity to electrical conductivity, we operated in the 15-45 MHz frequency range. We conducted both numerical and field experiments, under the assumptions that the soil magnetic permeability equals that of free space and that the soil permittivity and conductivity are frequency-independent. In the numerical experiments, we analyzed the sensitivity to soil permittivity and electrical conductivity by plotting the objective function of the inverse problem, and we analyzed the effects of modelling errors on the retrieval of the permittivity and conductivity. The results show that the soil electrical conductivity is sensitive enough to be characterized by the low-frequency drone-borne GPR. The depth of sensitivity was found to be around 0.5-1 m in the 15-45 MHz range. Yet, the effects of permittivity cannot be entirely neglected, especially for relatively wet soils. To validate our approach, we conducted field measurements with the drone-borne GPR and compared the results with electromagnetic induction (EMI) measurements at two different offsets, 0.5 and 1 m. The lightweight GPR system consists of a handheld vector network analyzer (VNA), a 5 m half-wave dipole antenna, a micro-computer stick, a GPS receiver and a power bank. The good agreement in absolute values and field structures between the GPR and EMI maps demonstrates the feasibility of the proposed low-frequency drone-borne GPR method, which thereby appears promising for precision agriculture applications.
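The frequency dependence of the sensitivity can be illustrated with the normal-incidence reflection coefficient of a non-magnetic homogeneous half-space, a textbook simplification of the full-wave model used by the authors; the permittivity and conductivity values below are illustrative.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def reflection_coefficient(f_hz, eps_r, sigma):
    """Normal-incidence air-soil reflection coefficient for a non-magnetic
    homogeneous half-space with relative permittivity eps_r and electrical
    conductivity sigma (S/m)."""
    eps_c = eps_r - 1j * sigma / (2 * np.pi * f_hz * EPS0)  # complex permittivity
    n = np.sqrt(eps_c)                                       # refractive index
    return (1 - n) / (1 + n)

def conductivity_sensitivity(f_hz, eps_r=10.0, s1=0.01, s2=0.02):
    """Change in |R| when the conductivity doubles: larger at low frequency,
    where the conduction term dominates the complex permittivity."""
    return abs(abs(reflection_coefficient(f_hz, eps_r, s2))
               - abs(reflection_coefficient(f_hz, eps_r, s1)))
```

At 30 MHz the conduction term σ/(ωε0) dominates eps_c, so |R| responds strongly to conductivity, while at 300 MHz the same conductivity change barely moves |R|, which is the rationale for operating at 15-45 MHz.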

How to cite: Wu, K. and Lambot, S.: Analysis of low-frequency drone-borne GPR for soil surface electrical conductivity mapping, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8594, https://doi.org/10.5194/egusphere-egu22-8594, 2022.

EGU22-8712 | Presentations | GI2.2

Estimation of point spread function for unmixing geological spectral mixtures 

Maitreya Mohan Sahoo, Arun Pattathal Vijayakumar, Ittai Herrmann, Shibu K. Mathew, and Alok Porwal

Geological materials are mixtures of different endmember constituents, most of which have particles smaller than the path length of the incident light. The spectral response (reflectance) obtained from such mixtures is nonlinear, which can be attributed to multiple scattering of light and to the height of the receiving sensor above the target surface. For a sensor with a fixed instantaneous field of view (IFOV), varying its field of view (FOV) by shifting its height affects the spatial resolution of the acquired spectra. We propose to estimate the point spread function (PSF) with which the spectral responses of fine-resolution pixels acquired by a sensor mix to produce a coarse-resolution pixel acquired by the same sensor. Our approach is based on the sensor's unchanged IFOV, which samples a smaller ground resolution cell (GRC) at a lower FOV and a larger GRC at an increased FOV. The larger GRC, producing a coarse-resolution pixel, can be modeled as a Gaussian PSF over the corresponding center and neighboring fine-resolution subpixels, with the center exerting the maximum influence. Extensive experiments performed with a point-based sensor and a push-broom scanner revealed variations in the PSF that depend on the sensor's FOV, the spatial interval of acquisition and the optical properties. The spectra of the coarse-resolution pixels were regressed against their corresponding fine-resolution subpixels to estimate the PSF values, which assumed the shape of a two-dimensional Gaussian function. Constraining these values to sum to one introduced sparsity and explained the variability in spectral acquisition by different sensors. The estimated PSFs were further validated through the linear spectral unmixing technique: the fractional abundances obtained for the fine-resolution subpixels, convolved with our estimated PSF, reproduced the corresponding coarse-resolution counterpart with minimal error. The PSFs obtained with different sensors also explained spectral mixing at different scales of observation and provided a basis for nonlinear unmixing that integrates spatial as well as spectral effects and addresses endmember variability. We performed our experiments on various coarse- and fine-grained igneous and sedimentary rocks under laboratory conditions and compared the results with the available literature.
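The forward model described, a sum-to-one 2-D Gaussian PSF mixing fine-resolution subpixel spectra into one coarse pixel, can be sketched as follows; the window size and width parameter are illustrative, not the estimated values.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """2-D Gaussian PSF on a size x size window, normalized to sum to one,
    so the center subpixel exerts the maximum influence."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def coarse_pixel(fine_block, psf):
    """Coarse-resolution spectrum as the PSF-weighted mix of fine subpixel
    spectra. fine_block: (size, size, n_bands) reflectance array."""
    return np.tensordot(psf, fine_block, axes=([0, 1], [0, 1]))
```

Because the weights sum to one, a spatially uniform block reproduces its own spectrum exactly, which is the consistency property exploited when regressing coarse pixels against their fine-resolution subpixels.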

How to cite: Sahoo, M. M., Pattathal Vijayakumar, A., Herrmann, I., Mathew, S. K., and Porwal, A.: Estimation of point spread function for unmixing geological spectral mixtures, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8712, https://doi.org/10.5194/egusphere-egu22-8712, 2022.

EGU22-9441 | Presentations | GI2.2

Water use efficiency (WUE) Modeling at Leaf level of Cotton (Gossypium hirsutum L.) in Telangana, India 

Shreedevi Moharana and Phanindra BVN Kambhammettu

Water use efficiency (WUE) plays a vital role in the planning and management of irrigation strategies. In terms of spatial scale, WUE can be quantified at scales ranging from leaf to whole plant to ecosystem to region; however, the inter-relations among these scales are poorly understood. This study aims to simulate the WUE of irrigated cotton at the leaf level and to investigate the role of environmental and biophysical conditions in WUE dynamics. The study was conducted in agricultural croplands located in Sangareddy district, about 70 km west of Hyderabad, the capital of the southern state of Telangana, India. Ground-based observations of soil moisture, photosynthetic parameters and meteorological parameters were made. The stomatal conductance and WUE of cotton leaves exposed to ambient CO2 were simulated using a modified Ball-Berry (mBB) model, with instantaneous gas exchange measured around noon used to parameterize and validate the model. We observed large diurnal (4.3±1.9 mmol CO2 mol-1 H2O) and seasonal (5.16±1.51 mmol CO2 mol-1 H2O) variations in leaf WUE during the crop period. Model-simulated stomatal conductance and WUE agree with the measurements (R2>0.5, RMSE<0.3). Our results indicate that WUE is governed by climatic as well as vegetative factors and is largely controlled by changes in transpiration rather than photosynthesis. This needs further investigation through extensive analysis built on a library of in-situ measurements.
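The Ball-Berry relation underlying the model, and the leaf-level WUE ratio it feeds, can be sketched as below; the slope m and intercept b values are hypothetical placeholders, not the authors' fitted parameters.

```python
def ball_berry_gs(a_n, h_s, c_s, m=9.0, b=0.01):
    """Ball-Berry stomatal conductance (mol m-2 s-1):
        gs = m * An * hs / Cs + b
    with An the net assimilation (umol CO2 m-2 s-1), hs the relative
    humidity at the leaf surface (0-1) and Cs the leaf-surface CO2 mole
    fraction (umol mol-1). m and b are illustrative values."""
    return m * a_n * h_s / c_s + b

def leaf_wue(a_n, e_t):
    """Leaf-level WUE (mmol CO2 per mol H2O): assimilation An
    (umol CO2 m-2 s-1) over transpiration E (mmol H2O m-2 s-1)."""
    return a_n / e_t
```

With midday-like values (An = 15 umol m-2 s-1, E = 3 mmol m-2 s-1) the ratio falls in the few-mmol-per-mol range reported in the abstract.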

 

Keywords: Cotton, WUE, Irrigation, Stomatal conductance, Ball Berry Model

How to cite: Moharana, S. and Kambhammettu, P. B.: Water use efficiency (WUE) Modeling at Leaf level of Cotton (Gossypium hirsutum L.) in Telangana, India, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9441, https://doi.org/10.5194/egusphere-egu22-9441, 2022.

EGU22-9845 | Presentations | GI2.2

Implementation of an interoperable platform integrating BIM and GIS information for network-level monitoring and assessment of bridges 

Luca Bertolini, Antonio Napolitano, Jhon Diezmos Manalo, Valerio Gagliardi, Luca Bianchini Ciampoli, and Fabrizio D'Amico

Monitoring critical civil engineering infrastructures, especially viaducts and bridges, has become a priority, as the ageing of construction materials may cause damage and collapses with dramatic consequences. Following recent bridge collapses, specific guidelines on the risk classification and management, safety assessment and monitoring of existing bridges have been issued in Italy by the Ministry of Infrastructure as a mandatory code [1]. Accordingly, several laws and regulations have been issued on the same topic, emphasizing the crucial role of BIM-based procedures in the design and management of civil infrastructures [2, 3]. Within this context, monitoring operations are generally conducted through on-site inspections by specialized operators, and only rarely by high-frequency ground-based Non-Destructive Testing methods (NDTs). Furthermore, satellite-based remote sensing techniques have been used increasingly and effectively for bridge monitoring in the last few years [4]. Generally, these crucial pieces of information are analyzed separately, and the implementation of a multi-scale, multi-source interoperable BIM platform is still an open challenge [5].

This study aims at investigating the potential of an interoperable and upgradeable BIM platform supplemented by non-destructive survey data, such as Mobile Laser Scanner (MLS), Ground Penetrating Radar (GPR) and Satellite Remote Sensing Information (i.e. InSAR). The main goal of the research is to contribute to the state-of-the-art knowledge on BIM applications, by testing an infrastructure management platform aiming at reducing the limits typically associated to the separate observation of these assessments, to the advantage of an integrated analysis including both the design information and the routinely updated results of monitoring activities.

The activities were conducted in the framework of the Project “M.LAZIO”, approved by the Lazio Region, with the aim of developing an informative BIM platform of the investigated bridges, interoperable within a Geographic Information System (GIS). As on-site surveys are carried out, a preliminary multi-source database of information is created, to be used as the starting point for the integration process and the development of the infrastructure management platform. Preliminary results have shown the promising viability of the data management model for supporting asset managers in the various management phases, thereby proving this methodology worthy of implementation in integrated infrastructure monitoring plans.

Acknowledgements

This research is supported by the Project “M.LAZIO”, accepted and funded by the Lazio Region, Italy. Funding from MIUR, in the frame of the “Departments of Excellence Initiative 2018–2022”, attributed to the Department of Engineering of Roma Tre University, is acknowledged.

References

[1] MIT, 2020. Ministero delle Infrastrutture e dei Trasporti, DM 578/2020

[2] EU, 2014. Directive 2014/24/EU of the European Parliament and of the Council of 26 February 2014 on public procurement and repealing Directive 2004/18/EC.

[3] MIMS, 2021. Ministero delle Infrastrutture e della Mobilità Sostenibile, DM 312/2021

[4] Gagliardi, V. et al., “Bridge monitoring and assessment by high-resolution satellite remote sensing technologies”. In SPIE Future Sensing Technologies; https://doi.org/10.1117/12.2579700

[5] D'Amico F. et al., "A novel BIM approach for supporting technical decision-making process in transport infrastructure management", Proc. SPIE 11863;  https://doi.org/10.1117/12.2600140

How to cite: Bertolini, L., Napolitano, A., Diezmos Manalo, J., Gagliardi, V., Bianchini Ciampoli, L., and D'Amico, F.: Implementation of an interoperable platform integrating BIM and GIS information for network-level monitoring and assessment of bridges, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9845, https://doi.org/10.5194/egusphere-egu22-9845, 2022.

Knowledge of the monument for its conservation is the result of multidisciplinary work based on the integration of different data sources obtainable from historical research, architectural survey, and the use of different imaging technologies. The latter are increasingly within the reach of conservators, architects and restoration companies thanks to falling costs and to the effort to produce increasingly user-friendly imaging technologies, in terms of both data acquisition and processing. The critical element is the interpretation of the results, on which the effectiveness of these technologies in answering the various questions posed by restoration depends. The scientific literature suggests different approaches aimed at making the interpretation of imaging diagnostics easier, particularly by means of: i) the comparison between direct data (cores, visual inspection) and results from non-invasive tests; ii) the use of specimens or laboratory test beds; iii) Virtual and Augmented Reality (VR/AR) used as a work environment to facilitate the interpretation of non-invasive imaging investigations. In particular, the reading and visualization of multiparametric information through VR/AR content extends the standard modes of transmitting knowledge of the physical characteristics and state of conservation of the architectural heritage. This approach represents an effective system for storing and analysing heterogeneous data derived from a number of diverse non-invasive imaging techniques, including high-frequency Ground Penetrating Radar (GPR), Infrared Thermography (IRT), seismic tomography and other diagnostic techniques.
In the context of the Heritage Within Project, a VR/AR platform interrelating heterogeneous data derived from GPR, IRT, and ultrasonic and sonic measurements, along with the results of finite element computations, has been developed and applied to the Convent of Our Lady of Mount Carmel in Lisbon to understand cause-and-effect mechanisms between the constructive characteristics, degradation pathologies and stress/deformation maps.

References

Gabellone F., Leucci G., Masini N., Persico R., Quarta G., Grasso F. 2013. Non-destructive prospecting and virtual reconstruction of the chapel of the Holy Spirit in Lecce, Italy. Near Surface Geophysics, doi: 10.3997/1873-0604.2012030

Gabellone F., Chiffi M., “Linguaggi digitali per la valorizzazione”, in F. Gabellone, M. T. Giannotta, M. F. Stifani, L. Donateo (a cura di), Soleto Ritrovata. Ricerche archeologiche e linguaggi digitali per la fruizione. Editrice Salentina, 2015. ISBN 978-88-98289-50-9

Masini N., Nuzzo L., Rizzo E., GPR investigations for the study and the restoration of the Rose Window of Troia Cathedral (Southern Italy), Near Surface Geophysics, 5 (5)(2007), pp. 287-300, ISSN: 1569-4445; doi: 10.3997/1873-0604.2007010 

Masini N., Soldovieri F. (Eds) (2017). Sensing the Past. From artifact to historical site. Series: Geotechnologies and the Environment, Vol. 16. Springer International Publishing, ISBN: 978-3-319-50516-9, doi: 10.1007/978-3-319-50518-3, pp. 575

Javier Ortega, Margarita González Hernández, Miguel Ángel García Izquierdo, Nicola Masini, et al. (2021). Heritage Within. European Research Project, ISBN: 978-989-54496-6-8, Braga 2021.

How to cite: Masini, N., Gabellone, F., and Ortega, J.: VR/AR based approach for the diagnosis of the state of conservation of the architectural heritage. The case of the Convento do Carmo in Lisbon, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10538, https://doi.org/10.5194/egusphere-egu22-10538, 2022.

EGU22-11201 | Presentations | GI2.2

DIARITSup: a framework to supervise live measurements, Digital Twins models computations and predictions for structures monitoring. 

Jean Dumoulin, Thibaud Toullier, Mathieu Simon, and Guillermo Andrade-Barroso

DIARITSup is a chain of software components following the concept of a ”system of systems”. It interconnects hardware and software layers dedicated to the in-situ monitoring of structures or critical components. It embeds data assimilation capabilities combined with specific physical or statistical models, such as inverse thermal and/or mechanical models, up to predictive ones. It aims at extracting and providing key parameters of interest for decision-making tools. Its framework natively integrates data collection from local sources but also from external systems [1, 2]. DIARITSup is a milestone in our roadmap for the SHM Digital Twins research framework. Furthermore, it intends to provide useful information for maintenance operations, not only for the surveyed targets but also for the deployed sensors.

Thanks to its Model-View-Controller (MVC) design pattern, DIARITSup can be extended, customized and connected to existing applications. Its core component is a supervisor task that handles the gathering of data from local sensors and from external sources, such as the open meteorological data (observations and forecasts) of the Météo-France Geoservice [4]. Meanwhile, a recorder manages the recording of all data and metadata in the Hierarchical Data Format (HDF5) [6]. HDF5 is used to its full potential through its Single-Writer-Multiple-Readers (SWMR) feature, which enables, for example, a graphical user interface to display the saved data in real time, or the live computation of SHM Digital Twins models [3]. Furthermore, the flexibility of HDF5 data storage allows the recording of various types of sensors, from point sensors to full-field ones. Finally, DIARITSup can handle massive deployments thanks to the Ansible [5] automation tool and GitLab synchronization for automatic updates. An overview of the developed software together with a real application case will be presented. Perspectives for improving the software with more component integrations (Copernicus Climate Data Store, etc.) and a more generic way to configure the acquisition and the models will also be discussed.
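The append-and-flush recording pattern enabled by HDF5's SWMR feature can be sketched with h5py; the file path, dataset name and chunk size below are illustrative choices, not DIARITSup's actual schema.

```python
import h5py
import numpy as np

# Writer side: datasets must be created BEFORE enabling SWMR mode;
# afterwards the writer appends and flushes so live readers see new data.
with h5py.File("monitor.h5", "w", libver="latest") as f:
    dset = f.create_dataset("sensors/temperature", shape=(0,),
                            maxshape=(None,), chunks=(256,), dtype="f8")
    f.swmr_mode = True  # readers may now open the file concurrently
    for chunk in (np.random.default_rng(0).normal(20.0, 0.5, 64) for _ in range(3)):
        n = dset.shape[0]
        dset.resize((n + chunk.size,))  # grow the unlimited dimension
        dset[n:] = chunk
        dset.flush()  # make the new samples visible to SWMR readers

# Reader side (in practice another process, e.g. a GUI or a model runner)
with h5py.File("monitor.h5", "r", libver="latest", swmr=True) as f:
    data = f["sensors/temperature"][:]
```

The key constraint of SWMR is that the file layout (groups, datasets, attributes) is frozen once `swmr_mode` is set; only dataset contents and extents may change, which matches an append-only monitoring log well.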


References
[1] Nicolas Le Touz, Thibaud Toullier, and Jean Dumoulin. “Infrared thermography applied to the study of heated and solar pavement: from numerical modeling to small scale laboratory experiments”. In: SPIE - Thermosense: Thermal Infrared Applications XXXIX. Anaheim, United States, Apr. 2017. url: https://hal.inria.fr/hal-01563851.
[2] Thibaud Toullier, Jean Dumoulin, and Laurent Mevel. “Study of measurements bias due to environmental and spatial discretization in long term thermal monitoring of structures by infrared thermography”. In: QIRT 2018 - 14th Quantitative InfraRed Thermography Conference. Berlin, Germany, June 2018. url: https://hal.inria.fr/hal-01890292.
[3] Nicolas Le Touz, Thibaud Toullier, and Jean Dumoulin. “Study of an optimal heating command law for structures with non-negligible thermal inertia in varying outdoor conditions”. In: Smart Structures and Systems 27.2 (2021), pp. 379–386. doi: 10.12989/sss.2021.27.2.379. url: https://hal.inria.fr/hal-03145348.
[4] Météo France. Données publiques Météo France. 2022. url: https://donneespubliques.meteofrance.fr.
[5] Red Hat & Ansible. Ansible is Simple IT Automation. 2022. url: https://www.ansible.com/.
[6] The HDF Group. Hierarchical Data Format, version 5. 1997-2022. url: https://www.hdfgroup.org/HDF5/.

How to cite: Dumoulin, J., Toullier, T., Simon, M., and Andrade-Barroso, G.: DIARITSup: a framework to supervise live measurements, Digital Twins models computations and predictions for structures monitoring., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11201, https://doi.org/10.5194/egusphere-egu22-11201, 2022.

EGU22-12743 | Presentations | GI2.2

Integrating Remote Sensing data to assess the protective effect of forests on rockfall: The case study of Monte San Liberatore (Campania, Italy) 

Alessandro Di Benedetto, Antonella Ambrosino, and Margherita Fiani

In recent years, great interest has been paid to the risk that hydrogeological instability poses to the territory, especially in densely populated and geologically fragile areas. 
Forests exert a natural restraining action and play an important protective role for the infrastructure and settlements below, shielding them from rocks falling from the rocky walls above. This protective action is influenced not only by the vegetation itself but also by the morphology of the terrain, as a steeply sloping surface can significantly increase the momentum of a rolling rock.
The aim of our work is to design a methodology based on the integration of remote sensing data, in detail optical satellite images and LiDAR data acquired by UAVs, to identify areas most prone to natural rockfall retention [1]. The results could then be used to identify areas that need to be reinforced artificially (rockfall nets) and naturally (protective forests).
The test area is located near Monte San Liberatore in the Campania region (Italy), which was affected in 1954 by a disastrous flood, in which heavy rains triggered several complex landslides in an area that was already geomorphologically susceptible. Indeed, there are several areas subject to a high risk of rockfall, whose exposed value is represented by a complex infrastructural network of viaducts, tunnels and galleries along the north-west slope of the mountain, which is partly covered by thick vegetation that reduces the rolling velocity of rocks detaching from the ridge. 
According to the Carta della Natura, the most widespread vegetation in the area is the holm oak (Quercus ilex), an evergreen, long-lived, medium-sized tree. Its taproot makes it resistant and stable, able to survive in extremely severe environments such as rocky soils or vertical walls, making it ideal for slope protection.
The first processing step involved multispectral analysis of Pleiades 1A four-band (RGB+NIR) high-resolution satellite images (HRSI). The computed vegetation indices (NDVI, RVI and NDWI) were used to assess the health status of the vegetation and its presumed age; thus, the most resilient areas of the natural compartment in terms of robustness and vigour were identified. The average plant height was determined using the normalized digital surface model (nDSM).
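The indices named above follow standard band-ratio definitions; a minimal numpy sketch (the band arguments and the small epsilon guarding against division by zero are illustrative, not the authors' exact implementation):

```python
import numpy as np

def spectral_indices(red, green, nir):
    # Standard band-ratio indices from a 4-band (RGB+NIR) image.
    red, green, nir = (b.astype(float) for b in (red, green, nir))
    eps = 1e-9  # avoid division by zero on dark pixels
    ndvi = (nir - red) / (nir + red + eps)    # vegetation vigour
    rvi = nir / (red + eps)                   # ratio vegetation index
    ndwi = (green - nir) / (green + nir + eps)  # water content (McFeeters form)
    return ndvi, rvi, ndwi
```

Each function maps per-pixel reflectances to an index raster, so it can be applied directly to the NumPy arrays of a satellite scene.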
Next, starting from the Digital Terrain Model (DTM), we derived the morphometric features suitable for describing the slope dynamics: slope gradient, aspect with respect to North, and plan and profile curvature. The DTM and the DSM were created by interpolating on a grid the LiDAR point cloud acquired via UAV. Classification of areas with similar characteristics was performed using Self-Organizing Maps (SOM), based on unsupervised learning.
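The unsupervised SOM classification step can be illustrated with a minimal numpy Self-Organizing Map; the grid size, learning rate and iteration count here are arbitrary choices for illustration, not those of the study.

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=500, lr0=0.5, sigma0=1.5, seed=0):
    # Minimal SOM: each node of a 2-D grid holds a weight vector in feature space.
    rng = np.random.default_rng(seed)
    gx, gy = grid
    nodes = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    W = rng.random((gx * gy, data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((W - x) ** 2).sum(1))      # best-matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                       # decaying learning rate
        sigma = sigma0 * (1 - frac) + 1e-3          # shrinking neighbourhood
        d2 = ((nodes - nodes[bmu]) ** 2).sum(1)
        h = np.exp(-d2 / (2 * sigma ** 2))          # neighbourhood function
        W += lr * h[:, None] * (x - W)
    return W

def classify(W, data):
    # Assign each sample (e.g. a pixel's morphometric/vegetation features)
    # to its best-matching map node.
    return np.argmin(((data[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
```

In practice each pixel's feature vector (slope, aspect, curvatures, vegetation indices) would be standardized before training, and the resulting node labels delimit the homogeneous areas.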
The classified maps obtained delimit areas that are similar from a morphological and vegetation point of view; in this way, all the areas with a higher propensity for natural rockfall retention were identified.

[1] Fanos, Ali Mutar, and Biswajeet Pradhan. "Laser scanning systems and techniques in rockfall source identification and risk assessment: a critical review." Earth Systems and Environment 2.2 (2018): 163-182.

How to cite: Di Benedetto, A., Ambrosino, A., and Fiani, M.: Integrating Remote Sensing data to assess the protective effect of forests on rockfall: The case study of Monte San Liberatore (Campania, Italy), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12743, https://doi.org/10.5194/egusphere-egu22-12743, 2022.

EGU22-13153 | Presentations | GI2.2

Integration of multiple geoscientific investigation methods for a better understanding of a water system: the example of Chimborazo glaciers melting effects on the Chambo aquifer, Ecuador 

Andrea Scozzari, Paolo Catelan, Francesco Chidichimo, Michele de Biase, Benito G. Mendoza Trujillo, Pedro A. Carrettero Poblete, and Salvatore Straface

The identification of the processes underlying natural systems often requires the adoption of multiple investigation techniques for the assessment of the sites under study. In this work, the combination of information derived from non-invasive sensing techniques, such as geophysics, remote sensing and hydrogeochemistry, highlights the possible influence of global climate change on the future water availability of an aquifer in a peculiar glacier context in central Ecuador. In particular, we show that the Chambo aquifer, which supplies potable water to the region, does not contain fossil water but is instead recharged over time. Indeed, the whole Chambo river basin is affected by the Chimborazo volcano, a glacierised mountain located in the inner tropics, one of the most critical places to be observed with regard to climate impact on water resources. Thanks to the information gathered by the various surveying techniques, numerical modelling permitted an estimate of the recharge, which can be fully accounted for by the runoff from the melting Chimborazo glaciers. Indeed, the retreat of the glaciers on top of the Chimborazo is an ongoing process presumably related to global climate change.

How to cite: Scozzari, A., Catelan, P., Chidichimo, F., de Biase, M., Mendoza Trujillo, B. G., Carrettero Poblete, P. A., and Straface, S.: Integration of multiple geoscientific investigation methods for a better understanding of a water system: the example of Chimborazo glaciers melting effects on the Chambo aquifer, Ecuador, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13153, https://doi.org/10.5194/egusphere-egu22-13153, 2022.

EGU22-13401 | Presentations | GI2.2

Tunnel deformation rate analysis based on PS-InSAR technique and stress-area method  

Long Chai, Xiongyao Xie, Pan Li, Biao Zhou, and Li Zeng

The permanent scatterer synthetic aperture radar interferometry (PS-InSAR) technique can detect permanent scatterers (PSs) on the ground, but the deformation of PSs cannot be used directly to analyse the deformation of underground structures such as tunnels. In this paper, a process for tunnel deformation analysis using PS data and the stress-area method is proposed. The deformation data of the PSs are used to fit the surface deformation above the tunnel by kriging interpolation. The stress-area method is then used to calculate the deformation of the soil above the tunnel, from which the deformation of the tunnel is obtained. This process was applied to a tunnel in Shanghai, China. The results show that the fitted surface deformation rates are accurate, with a maximum absolute difference of 1.45 mm/y and a minimum of 0.11 mm/y compared with the levelling monitoring data. The tunnel deformation rate calculated by this process is close to the measured rate, with errors at the millimetre level. The surface and tunnel deformation rate curves are similar along the tunnel axis. The PS-InSAR technique has the advantage of providing large-area, historical surface deformation data; combined with the process proposed here, large-scale tunnel deformation analysis can be achieved.
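The kriging-interpolation step can be sketched as ordinary kriging in numpy; the exponential variogram and its parameters below are assumptions for illustration, since the paper does not specify the variogram model fitted to the PS data.

```python
import numpy as np

def variogram(h, sill=1.0, rng=50.0):
    # Exponential variogram model (assumed; sill and range are illustrative).
    return sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(pts, vals, targets, sill=1.0, rng=50.0):
    # Ordinary kriging: solve the augmented system [[G, 1], [1^T, 0]] [w; mu] = [g; 1]
    # at each target, then estimate as the weighted sum of observed values.
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d, sill, rng)  # semivariances between data points
    A[n, n] = 0.0                        # Lagrange-multiplier corner
    out = []
    for t in targets:
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(pts - t, axis=1), sill, rng)
        w = np.linalg.solve(A, b)
        out.append(w[:n] @ vals)         # estimated deformation rate at t
    return np.array(out)
```

Applied to the PS deformation rates, `pts` would be the scatterer coordinates, `vals` their line-of-sight rates, and `targets` a grid along the tunnel alignment; without a nugget, the estimator honours the data exactly at the PS locations.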

How to cite: Chai, L., Xie, X., Li, P., Zhou, B., and Zeng, L.: Tunnel deformation rate analysis based on PS-InSAR technique and stress-area method , EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13401, https://doi.org/10.5194/egusphere-egu22-13401, 2022.

EGU22-13441 | Presentations | GI2.2

Collaborative use of ground monitoring and GPR data for the control of ground settlement in shield tunnel in soft soil 

Kang Li, Xiongyao Xie, Xiaobin Zhang, Biao Zhou, Tenfei Qu, and Li Zeng

In recent years, China's demand for shield tunnels in soft soil has continued to increase, and the control of ground settlement during tunnel boring directly affects the safety of the tunnel itself and its superstructure. Paying close attention to controlling strata loss and ground settlement by multiple means is important to ensure construction safety. In this paper, an intelligent real-time monitoring system with dual-frequency ground penetrating radar (GPR) is used to assess the quality of the back-fill grouting of a shield tunnel, while monitoring points arranged on the ground surface acquire the settlement values in real time. The joint analysis of the ground and underground monitoring results reveals the relationship between grouting and settlement and provides dynamic guidance for the grouting operation, which helps to control ground settlement better. Finally, this paper proposes an outlook on a multi-source data fusion system based on a cloud computing platform to handle the more complex and abundant data expected in the future, so as to achieve higher accuracy, efficiency and intelligence in monitoring data analysis.

How to cite: Li, K., Xie, X., Zhang, X., Zhou, B., Qu, T., and Zeng, L.: Collaborative use of ground monitoring and GPR data for the control of ground settlement in shield tunnel in soft soil, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13441, https://doi.org/10.5194/egusphere-egu22-13441, 2022.

EGU22-13515 | Presentations | GI2.2

Application of ground penetrating radar (GPR) in look-ahead detection of slurry balance shield machine 

Weiwei Duan, Xiongyao Xie, Yong Yang, Kun Zeng, Huiming Wu, Li Zeng, and Kang Li

The shield machine has become the mainstream method of subway tunnel construction because of its safety and efficiency, but with the continuous development of urban construction the environment of subway tunnel construction is becoming more and more complex. During shield tunnelling in the southern cities of China, slurry balance shield machines often encounter obstacles such as large-diameter boulders and concrete pile foundations, which can cause the shield machine to become stuck. It is therefore necessary to detect quickly and accurately the distribution of obstacles ahead of the shield excavation face, so that operators can take timely measures to reduce the occurrence of such accidents. Ground penetrating radar (GPR) is widely used in engineering geological exploration; compared with other detection methods it has the advantages of requiring little working space, high efficiency and the absence of damage. When the GPR antenna is mounted on the cutter head of the shield machine, obstacles in the strata ahead of the machine can be detected in real time. In this configuration the antenna rotates with the cutter head, forming a circumferential survey line. Based on the Finite-Difference Time-Domain (FDTD) method, the authors use the open-source numerical simulation software gprMax to simulate GPR circumferential detection with the antenna array rotating with the cutter head, which verifies the theoretical feasibility of this method. By simulating the emission and reflection of the radar electromagnetic waves, we study the propagation pattern of the waves reflected by obstacles and derive the corresponding image patterns, establishing the foundation for image-based recognition of obstacles. Because the radar wave is susceptible to electromagnetic interference, GPR still lacks engineering practice in shield advance detection. 
To reduce the interference of the surrounding metal cutter head, a new strip radar antenna with a shielding shutter is designed to improve the directivity of the electromagnetic wave propagation. Several antennas are fixed at slurry openings of the cutter head of the slurry balance shield machine to form a radar antenna array and improve detection efficiency and accuracy.
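The FDTD principle behind such simulations can be illustrated with a one-dimensional sketch in normalized units (free space, Courant number 0.5); the grid size and Gaussian source are arbitrary, and this is of course far simpler than gprMax's full 3-D solver.

```python
import numpy as np

def fdtd_1d(nsteps=300, nz=200, src=100):
    # 1-D Yee scheme: E and H live on staggered grids and are updated
    # alternately; the factor 0.5 is the Courant number (stable for <= 1).
    ez = np.zeros(nz)
    hy = np.zeros(nz)
    for t in range(nsteps):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])   # update magnetic field
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])    # update electric field
        ez[src] += np.exp(-0.5 * ((t - 30) / 10) ** 2)  # Gaussian pulse source
    return ez

field = fdtd_1d()
```

A buried obstacle would be modelled by giving some cells a different permittivity, producing the reflected wave whose arrival pattern the abstract's image recognition relies on.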

How to cite: Duan, W., Xie, X., Yang, Y., Zeng, K., Wu, H., Zeng, L., and Li, K.: Application of ground penetrating radar (GPR) in look-ahead detection of slurry balance shield machine, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13515, https://doi.org/10.5194/egusphere-egu22-13515, 2022.

EGU22-1024 | Presentations | ITS3.5/NP3.1

Efficiency and synergy of simple protective measures against COVID-19: Masks, ventilation and more 

Ulrich Pöschl, Yafang Cheng, Frank Helleis, Thomas Klimach, and Hang Su

The public and scientific discourse on how to mitigate the COVID-19 pandemic is often focused on the impact of individual protective measures, in particular vaccination. In view of changing virus variants and conditions, however, it is not clear whether vaccination or any other single protective measure can suffice to contain the transmission of SARS-CoV-2. Accounting for both droplet and aerosol transmission, we investigated the effectiveness and synergies of vaccination and non-pharmaceutical interventions like masking, distancing & ventilation, testing & isolation, and contact reduction as a function of compliance in the population. For realistic conditions, we find that it would be difficult to contain highly contagious SARS-CoV-2 variants by any individual measure. Instead, we show how multiple synergetic measures have to be combined to reduce the effective reproduction number (Re) below unity for different basic reproduction numbers, ranging from the SARS-CoV-2 ancestral strain up to measles-like values (R0 = 3 to 18).
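As a toy illustration of how synergetic measures combine, one can assume each measure independently scales transmission by (1 - efficacy x compliance); the numbers below are hypothetical, and the study's actual model treats droplet and aerosol transmission in far more detail.

```python
def effective_R(R0, measures):
    """Effective reproduction number after applying protective measures.

    measures: list of (efficacy, compliance) pairs; assumes each measure
    independently and multiplicatively reduces transmission (illustrative only).
    """
    Re = R0
    for efficacy, compliance in measures:
        Re *= 1.0 - efficacy * compliance
    return Re

# Hypothetical values: masking, ventilation & distancing, vaccination.
measures = [(0.5, 0.8), (0.4, 0.7), (0.7, 0.8)]
Re = effective_R(5.0, measures)  # Re falls below 1 with these assumed values
```

The example shows the qualitative point of the abstract: no single factor above brings R0 = 5 below unity, but their combination does.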

Face masks are well-established and effective preventive measures against the transmission of respiratory viruses and diseases, but their effectiveness for mitigating SARS-CoV-2 transmission is still under debate. We show that variations in mask efficacy can be explained by different regimes of virus abundance (virus-limited vs. virus-rich) and are related to population-average infection probability and reproduction number. Under virus-limited conditions, both surgical and FFP2/N95 masks are effective at reducing the virus spread, and universal masking with correctly applied FFP2/N95 masks can reduce infection probabilities by factors up to 100 or more (source control and wearer protection).

Masks are particularly effective in combination with synergetic measures like ventilation and distancing, which can reduce the viral load in breathing air by factors of up to 10 or more and help maintain virus-limited conditions. Extensive experimental studies, measurement data, numerical calculations, and practical experience show that window ventilation supported by exhaust fans (i.e. mechanical extract ventilation) is a simple and highly effective measure to increase air quality in classrooms. This approach can be used against the aerosol transmission of SARS-CoV-2. Mechanical extract ventilation (MEV) is very well suited not only for combating the COVID-19 pandemic but also for sustainably ventilating schools in an energy-saving, resource-efficient, and climate-friendly manner. Distributed extract ducts or hoods can be flexibly reused, removed and stored, or combined with other devices (e.g. CO2 sensors), which is easy thanks to the modular approach and low-cost materials (www.ventilationmainz.de).

The scientific findings and approaches outlined above can be used to design, communicate, and implement efficient strategies for mitigating the COVID-19 pandemic.

References:

Cheng et al., Face masks effectively limit the probability of SARS-CoV-2 transmission, Science, 372, 1439, 2021, https://doi.org/10.1126/science.abg6296 

Klimach et al., The Max Planck Institute for Chemistry mechanical extract ventilation (MPIC-MEV) system against aerosol transmission of COVID-19, Zenodo, 2021, https://doi.org/10.5281/zenodo.5802048  

Su et al., Synergetic measures to contain highly transmissible variants of SARS-CoV-2, medRxiv, 2021, https://doi.org/10.1101/2021.11.24.21266824


How to cite: Pöschl, U., Cheng, Y., Helleis, F., Klimach, T., and Su, H.: Efficiency and synergy of simple protective measures against COVID-19: Masks, ventilation and more, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1024, https://doi.org/10.5194/egusphere-egu22-1024, 2022.

EGU22-1890 | Presentations | ITS3.5/NP3.1

Possible effect of the particulate matter (PM) pollution on the Covid-19 spread in southern Europe 

Jean-Baptiste Renard, Gilles Delaunay, Eric Poincelet, and Jérémy Surcin

The time evolution of COVID-19 death cases has exhibited several distinct episodes since the start of the pandemic in early 2020. We propose an analysis of several southern European regions that highlights how the beginning of each episode correlates with a strong increase in the concentration of particulate matter smaller than 2.5 µm (PM2.5). After the initial PM2.5 spike, the evolution of the COVID-19 spread depends on (partial) lockdowns and vaccination campaigns, so the highest confidence in the correlation can only be achieved when considering the beginning of each episode. The analysis is conducted for the 2020-2022 period at different locations: the Lombardy region (Italy), where we consider the mass concentration measurements (µg.m-3) obtained by air quality monitoring stations, and the cities of Paris (France), Lisbon (Portugal) and Madrid (Spain), using in-situ particle counting measurements (cm-3) in the 0.5-2.5 µm size range obtained with hundreds of mobile aerosol counters. The particle counting methodology is more suitable for evaluating the possible correlation between PM pollution and COVID-19 spread because it better estimates the concentration of submicronic particles, whereas a mass concentration measurement would yield skewed results dominated by larger particles. Very fine particles smaller than one micron penetrate deeper into the body and can even cross the alveolar-capillary barrier, subsequently reaching most organs through the bloodstream and potentially triggering a pejorative systemic inflammatory reaction. The rapidly increasing number of deaths attributed to COVID-19 starts between two weeks and one month after the PM events, which often occur in winter; this is consistent with the incubation time of the virus and its lethal outcome. We suggest that pollution by submicronic particles alters the status of the pulmonary alveoli and thus significantly increases the susceptibility of the lungs to the virus.
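The lag analysis described above (correlating PM spikes with deaths two weeks to one month later) can be sketched as a lagged Pearson correlation; the synthetic series below is illustrative, not the authors' data.

```python
import numpy as np

def best_lag(pm, deaths, lags=range(0, 35)):
    # Lag (in days) maximizing the Pearson correlation between the PM series
    # and the death series shifted back by that lag.
    def corr(lag):
        return np.corrcoef(pm[:len(pm) - lag], deaths[lag:])[0, 1]
    return max(lags, key=corr)

# Synthetic check: deaths follow PM with a 14-day delay plus noise.
rng = np.random.default_rng(0)
pm = rng.gamma(2.0, 10.0, 365)
deaths = np.concatenate([np.zeros(14), pm[:-14]]) + rng.normal(0, 1.0, 365)
```

On real series, one would also detrend and account for lockdown effects, which is why the abstract restricts the claim to the beginning of each episode.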

How to cite: Renard, J.-B., Delaunay, G., Poincelet, E., and Surcin, J.: Possible effect of the particulate matter (PM) pollution on the Covid-19 spread in southern Europe, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1890, https://doi.org/10.5194/egusphere-egu22-1890, 2022.

In the past two years, numerous advances have been made in the ability to predict the progress of COVID-19 epidemics. Basic forecasting of the health state of a population with respect to a given disease is based on the well-known family of SIR (Susceptible-Infected-Recovered) models. The models used in epidemiology were based on deterministic behaviour, so the epidemiological picture tomorrow depends exclusively on the numbers recorded today. The forecasting shortcomings of the deterministic SEIR models previously used in epidemiology were difficult to highlight before the advent of COVID-19, because epidemiology was mostly not concerned with real-time forecasting. From the first wave of COVID-19 infections, the limitations of deterministic models were immediately evident: to use them, one should know the exact status of the population, and this knowledge was limited by the capacity to process swabs. Furthermore, there is an intrinsic variability of the dynamics, which depends on age, sex, characteristics of the virus, variants and vaccination status. 

Our main contribution was to show that a SEIR model assuming these parameters to be constant cannot be used for reliable predictions of the COVID-19 pandemic, and that more realistic forecasts can be obtained by adding fluctuations to the model. The fluctuations in the dynamics of the virus induced by these factors do not just add variability around the deterministic solution of the SIR models; they also introduce a different timing of the pandemic, which influences the epidemic peak. With our model we have found that, even with a reproduction number Rt less than 1, local epidemic peaks can occur that resume over a certain period of time. 

Introducing noise and uncertainty allows us to define a range of possible scenarios, instead of making a single prediction. This is what happens when we replace the deterministic approach with a probabilistic one. The probabilistic models used to predict the progress of the COVID-19 epidemic are conceptually very similar to those used by climatologists to imagine future environmental scenarios based on the actions taken in the present. As human beings we can intervene in both systems: based on the choices we make and the fluctuations of the systems, we can predict different responses. In the context of the emergency we faced, collaboration between different scientific fields was therefore fundamental, as their exchanges provided more accurate answers. In particular, a close collaboration has arisen between epidemiologists and climatologists: a synergy that can greatly help society in a difficult moment.
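A minimal sketch of such a stochastic SEIR model follows; the daily time step, the parameter values and the log-normal noise on the transmission rate are illustrative choices, not those of the cited papers, but they show how fluctuations spread the trajectories around the deterministic solution.

```python
import numpy as np

def stochastic_seir(days=200, N=1e6, R0=1.5, sigma=1/5, gamma=1/10,
                    noise=0.3, seed=0):
    # Daily-step SEIR with multiplicative noise on the transmission rate beta.
    # sigma: incubation rate (1/latent period), gamma: recovery rate.
    rng = np.random.default_rng(seed)
    S, E, I, R = N - 100, 0.0, 100.0, 0.0
    beta0 = R0 * gamma
    infected = []
    for _ in range(days):
        beta = beta0 * np.exp(noise * rng.standard_normal())  # fluctuating contacts
        new_e = beta * S * I / N   # new exposures
        new_i = sigma * E          # exposed becoming infectious
        new_r = gamma * I          # recoveries
        S -= new_e
        E += new_e - new_i
        I += new_i - new_r
        R += new_r
        infected.append(I)
    return np.array(infected)

trajectory = stochastic_seir()
```

Running the model with many seeds yields an ensemble of trajectories, from which scenario ranges (rather than a single prediction) are read off, exactly as in ensemble climate projections.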

References

-Faranda, Castillo, Hulme, Jezequel, Lamb, Sato & Thompson (2020). Chaos: An Interdisciplinary Journal of Nonlinear Science, 30(5), 051107.

-Alberti & Faranda (2020). Communications in Nonlinear Science and Numerical Simulation, 90, 105372.

-Faranda & Alberti (2020). Chaos: An Interdisciplinary Journal of Nonlinear Science, 30(11), 111101.

-Faranda, Alberti, Arutkin, Lembo, Lucarini (2021). Chaos: An Interdisciplinary Journal of Nonlinear Science, 31(4), 041105.

-Arutkin, Faranda, Alberti & Vallée (2021). Chaos: An Interdisciplinary Journal of Nonlinear Science, 31(10), 101107.

How to cite: Faranda, D.: How concepts and ideas from Statistical and Climate physics improve epidemiological modelling of the COVID 19 pandemics, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2801, https://doi.org/10.5194/egusphere-egu22-2801, 2022.

EGU22-3690 | Presentations | ITS3.5/NP3.1

Improving the conservation of virus infectivity during airborne exposure experiments 

Ghislain Motos, Kalliopi Violaki, Aline Schaub, Shannon David, Tamar Kohn, and Athanasios Nenes

Recurrent epidemic outbreaks such as the seasonal flu and the ongoing COVID-19 are disastrous events for our societies in terms of fatalities, disruption of social and educational structures, and financial losses. The difficulty of controlling the spread of COVID-19 over the last two years has provided evidence that basic transmission mechanisms of such pathogens are still poorly understood.

Three different routes of virus transmission are known: direct contact (e.g. through handshakes) and indirect contact through fomites; ballistic droplets produced by speaking, sneezing or coughing; and airborne transmission through aerosols, which can also be produced by normal breathing. The latter route, long ignored, even by the World Health Organization during the COVID-19 pandemic, now appears to play the predominant role in the spread of airborne diseases (e.g. Chen et al., 2020).

Further scientific research thus needs to be conducted to better understand the mechanistic processes that inactivate airborne viruses, as well as the environmental conditions that favour these processes. In addition to modelling and epidemiological studies, chamber experiments, in which viruses are exposed to various humidities, temperatures and/or UV doses, make it possible to simulate everyday conditions of virus transmission. However, the current standard instruments for aerosolizing viruses into the chamber and sampling them from it use strong fluid forces and recirculation, which can cause infectivity losses (Alsved et al., 2020), and do not reproduce the way airborne aerosol is actually produced in the respiratory tract.

In this study, we utilized two of the softest aerosolization and sampling techniques available: the sparging liquid aerosol generator (SLAG, CH Technologies Inc., Westwood, NJ, USA), which forms aerosol from a liquid suspension by bubble bursting, thus mimicking natural aerosol formation in wet environments (e.g. the respiratory system, but also lakes, the sea, toilets, etc.); and the viable virus aerosol sampler (BioSpot-VIVAS, Aerosol Devices Inc., Fort Collins, CO, USA), which grows particles via water vapour condensation to gently collect them down to a few nanometres in size. We characterized these systems with particle sizers and biological analysers, using non-pathogenic viruses such as bacteriophages suspended in surrogate lung fluid and artificial saliva. We compared the size distribution of the aerosol produced from these suspensions against similar distributions generated with standard nebulizers, and assessed the ability of these devices to produce aerosol that much more closely resembles that in human exhaled air. We also assessed the conservation of viral infectivity with the VIVAS versus conventional biosamplers.

 

Acknowledgment

 

We acknowledge the IVEA project, funded in the framework of a SINERGIA grant (Swiss National Science Foundation).

 

References

 

Alsved, M., Bourouiba, L., Duchaine, C., Löndahl, J., Marr, L. C., Parker, S. T., Prussin, A. J., and Thomas, R. J. (2020): Natural sources and experimental generation of bioaerosols: Challenges and perspectives, Aerosol Science and Technology, 54, 547–571.

Chen, W., Zhang, N., Wei, J., Yen, H.-L., and Li, Y. (2020): Short-range airborne route dominates exposure of respiratory infection during close contact, Building and Environment, 176, 106859.

How to cite: Motos, G., Violaki, K., Schaub, A., David, S., Kohn, T., and Nenes, A.: Improving the conservation of virus infectivity during airborne exposure experiments, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3690, https://doi.org/10.5194/egusphere-egu22-3690, 2022.

EGU22-3936 | Presentations | ITS3.5/NP3.1

COVID-19 effects on measurements of the Earth Magnetic Field in the urbanized area of Brest 

Jean-Francois Oehler, Alexandre Leon, Sylvain Lucas, André Lusven, and Gildas Delachienne

Shom (Service Hydrographique et Océanographique de la Marine), Brest, France

 

Since September 2019, Shom’s Magnetic Station (SMS) has been deployed in the northern neighbourhoods of the medium-sized city of Brest (Brittany, France, about 210,000 inhabitants). SMS continuously measures the intensity of the Earth Magnetic Field (EMF) with an absolute Overhauser sensor. The main goal of SMS is to derive local external variations of the EMF, mainly due to solar activity. These variations appear as low- and high-frequency parasitic signals in magnetic data and need to be corrected. Mobile magnetic stations and permanent observatories are usually installed in isolated areas, far from human activities and electromagnetic disturbances. This is clearly not the case for SMS, mainly for practical reasons of security, maintenance and data accessibility. However, despite its location in an urbanized area, SMS remains the westernmost reference station for processing marine magnetic data collected along the Atlantic and Channel coasts of France.

The COVID-19 pandemic has had unexpected consequences on the quality of the measurements collected by SMS. For example, during the first French lockdown between March and May 2020, the noise level decreased by about 50%: average standard deviations, computed on the 1 Hz time series over 1-minute periods, fell from about 1.5 nT to 0.8 nT. This more stable behaviour of SMS is clearly correlated with the drop in human activity and traffic in the city of Brest.
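The noise metric quoted above is easy to reproduce. The sketch below (pure Python, with synthetic data standing in for the actual SMS record) computes the average standard deviation of consecutive 1-minute windows of a 1 Hz series, the statistic the abstract reports falling from about 1.5 nT to 0.8 nT:

```python
import math
import random

def window_std(values):
    """Standard deviation of one window of samples."""
    m = sum(values) / len(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

def mean_minute_std(series_1hz, window_s=60):
    """Average the standard deviation over consecutive 1-minute windows of a
    1 Hz series: the noise metric quoted in the abstract (~1.5 nT before the
    lockdown, ~0.8 nT during it)."""
    stds = [window_std(series_1hz[k:k + window_s])
            for k in range(0, len(series_1hz) - window_s + 1, window_s)]
    return sum(stds) / len(stds)

# Synthetic stand-in for one hour of SMS data: a 47000 nT field carrying
# 1.5 nT of urban noise.
random.seed(0)
field = [47000.0 + random.gauss(0.0, 1.5) for _ in range(3600)]
print(round(mean_minute_std(field), 2))  # close to 1.5
```

Because the statistic is computed per minute and then averaged, a slow diurnal drift of the field barely affects it, which makes it a reasonable proxy for short-period anthropogenic noise.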

 

Keywords: Shom’s Magnetic Station (SMS), Earth Magnetic Field, COVID-19.

 

How to cite: Oehler, J.-F., Leon, A., Lucas, S., Lusven, A., and Delachienne, G.: COVID-19 effects on measurements of the Earth Magnetic Field in the urbanized area of Brest, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3936, https://doi.org/10.5194/egusphere-egu22-3936, 2022.

EGU22-4170 | Presentations | ITS3.5/NP3.1

Comprehensive Insights Into O3 Changes During the COVID-19 From O3 Formation Regime and Atmospheric Oxidation Capacity

Economic activities and the associated emissions declined significantly during the COVID-19 pandemic, creating a natural experiment for assessing the impact of precursor emission controls on ozone (O3) pollution. In this study, we utilized comprehensive satellite and ground-level observations together with source-oriented chemical transport modelling to investigate the O3 variations during the COVID-19 pandemic in China. We found that the significant O3 elevations in the North China Plain (40%) and the Yangtze River Delta (35%) were mainly attributable to the enhanced atmospheric oxidation capacity (AOC) in these regions, associated with the meteorology and emission reductions during lockdown. In addition, O3 formation regimes shifted from VOC-limited to NOx-limited and transition regimes with the decline of NOx during lockdown. We suggest that future O3 control policies should comprehensively consider the effect of AOC on O3 elevation and coordinate the regulation of O3 precursor emissions.

How to cite: Wang, P., Zhu, S., and Zhang, H.: Comprehensive Insights Into O3 Changes During the COVID-19 From O3 Formation Regime and Atmospheric Oxidation Capacity, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4170, https://doi.org/10.5194/egusphere-egu22-4170, 2022.

EGU22-5126 | Presentations | ITS3.5/NP3.1

Nature-based Solutions in actions: improving landscape connectivity during the COVID-19 

Yangzi Qiu, Ioulia Tchiguirinskaia, and Daniel Schertzer

In the last few decades, Nature-based Solutions (NBS) have come to be widely considered a sustainable strategy for the development of urban environments. Assessing the performance of NBS is essential for understanding their efficiency in addressing a large range of natural and societal challenges, such as climate change, ecosystem services and human health. With the rapid onset of the COVID-19 pandemic, the close relationship between humans and nature has become apparent. However, current catchment management focuses mainly on reducing hydro-meteorological and/or climatological risks and improving urban climate resilience. This single-dimensional management seems insufficient when facing epidemics, and multi-dimensional management (e.g., reducing zoonoses) is necessary. In this respect, policymakers are paying more attention to NBS. Hence, it is important to increase landscape connectivity with the help of NBS, in order to improve ecosystem services and reduce the health risks associated with COVID-19.

This study takes the Guyancourt catchment as an example. The selected catchment is located in the southwest suburbs of Paris, with a total area of around 5.2 km2. The ArcGIS software is used to assess the patterns of structural landscape connectivity, and the heterogeneous spatial distribution of current green spaces over the catchment is quantified with the scale-independent indicator of fractal dimension. To quantify opportunities to increase landscape connectivity over the catchment, a least-cost path approach is used to map potential NBS linking urban green spaces through vacant parcels, alleys, and smaller green spaces. Finally, to prioritise these potential NBS across multiple scales, a new scale-independent indicator within the Universal Multifractal framework is proposed in this study.
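The least-cost path step is, at its core, a shortest-path computation over a friction raster; in ArcGIS it is typically done with the Cost Distance/Cost Path tools. A minimal Dijkstra sketch of the idea (the grid, the costs and the endpoints below are hypothetical, not from the study):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a 2-D cost raster: the cheapest 4-connected path from
    start to goal, where entering cell (r, c) costs cost[r][c]."""
    rows, cols = len(cost), len(cost[0])
    best = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > best[node]:
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], best[goal]

# Hypothetical friction raster: low cost along green corridors (1),
# high cost through built-up parcels (9).
raster = [[1, 9, 1, 1],
          [1, 9, 1, 9],
          [1, 1, 1, 9],
          [9, 9, 1, 1]]
path, total = least_cost_path(raster, (0, 0), (3, 3))
print(total, path)  # total cost 6, path hugging the low-cost corridor
```

In a connectivity study, the raster costs would encode land-cover resistance (e.g. low for vacant parcels and alleys, high for buildings), and the resulting paths are the candidate NBS links between green spaces.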

The results indicated that NBS can effectively improve landscape connectivity and have the potential to reduce the physical and mental health risks caused by COVID-19. Overall, this study proposes a scale-independent approach for enhancing the multiscale connectivity of the NBS network in urban areas and provides quantitative suggestions for on-site redevelopment.

How to cite: Qiu, Y., Tchiguirinskaia, I., and Schertzer, D.: Nature-based Solutions in actions: improving landscape connectivity during the COVID-19, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5126, https://doi.org/10.5194/egusphere-egu22-5126, 2022.

EGU22-5150 | Presentations | ITS3.5/NP3.1

The associations between environmental factors and COVID-19: early evidence from China 

Xia Meng, Ye Yao, Weibing Wang, and Haidong Kan

The Coronavirus (COVID-19) epidemic, first reported in December 2019 in Wuhan, China, has become one of the most important public health issues worldwide. Previous studies have shown the importance of weather variables and air pollution in the transmission or prognosis of infectious diseases, including, but not limited to, influenza and severe acute respiratory syndrome (SARS). In the early stage of the COVID-19 epidemic, there was intense debate, and results were inconsistent, on whether environmental factors were associated with the spread and prognosis of COVID-19. Our team therefore conducted a series of studies to explore the associations between atmospheric parameters (temperature, humidity, UV radiation, particulate matter and nitrogen dioxide) and COVID-19 (transmissibility and prognosis) in the early stage of the epidemic, using data from early 2020 in China and worldwide. Our results showed that meteorological conditions (temperature, humidity and UV radiation) had no significant associations with the cumulative incidence rate or R0 of COVID-19, whether based on data from 224 Chinese cities or on data from 202 locations in 8 countries before March 9, 2020, suggesting that the ability of COVID-19 to spread in the general population would not change significantly with increasing temperature or UV radiation, or with changes in humidity. Moreover, we found that particulate matter pollution was significantly associated with the case fatality rate (CFR) of COVID-19 in 49 Chinese cities, based on data before April 12, 2020, indicating that air pollution might worsen the prognosis of COVID-19. Our studies provide an environmental perspective for the prevention and treatment of COVID-19.

How to cite: Meng, X., Yao, Y., Wang, W., and Kan, H.: The associations between environmental factors and COVID-19: early evidence from China, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5150, https://doi.org/10.5194/egusphere-egu22-5150, 2022.

EGU22-9213 | Presentations | ITS3.5/NP3.1

The Effects of COVID-19 Lockdown on Air Quality and Health in India and Finland 

Shubham Sharma, Behzad Heibati, Jagriti Suneja, and Sri Harsha Kota

The COVID-19 lockdowns worldwide provided an opportunity to evaluate the impacts of restricted movement and emissions on air quality. In this study, we analyze data obtained from ground-based observation stations for six air pollutants (PM10, PM2.5, CO, NO2, O3 and SO2) and meteorological parameters from March 25th to May 31st in 22 cities representative of five regions of India, and from March 16th to May 14th in 21 districts of Finland, for the years 2017 to 2020. NO2 concentrations dropped significantly during all phases, with the exception of East India during Phase 1. O3 concentrations in West India reduced significantly in all four phases, most strongly during Phase 2 (~38%). PM2.5 concentrations nearly halved across India during all phases except in South India, where only a marginal reduction (2%) was observed during Phase 4. SO2 (~31%) and CO (~41%) concentrations also reduced noticeably in South India and North India during all the phases. Air temperature rose by ~10% on average during all the phases across India compared to 2017-2019. In Finland, NO2 concentrations reduced substantially in 2020. Apart from Phase 1, the concentrations of PM10 and PM2.5 reduced markedly in all phases across Finland. O3 and SO2 concentrations stayed within permissible limits throughout the study period for all four years, peaking in 2017, while levels of sulfurous compounds (OSCs) increased during all the phases across Finland. Changes in mobility patterns were also assessed and were found to have decreased significantly during the lockdown. The mortality benefits of the reduction in PM2.5 concentrations have also been estimated for India and Finland. This research thus illustrates the effectiveness of lockdowns and provides timely policy suggestions to regulators for implementing interventions to improve air quality.

How to cite: Sharma, S., Heibati, B., Suneja, J., and Kota, S. H.: The Effects of COVID-19 Lockdown on Air Quality and Health in India and Finland, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9213, https://doi.org/10.5194/egusphere-egu22-9213, 2022.

EGU22-9812 | Presentations | ITS3.5/NP3.1

Changes in Global Urban Air Quality due to Large Scale Disruptions of Activity 

Will Drysdale, Charlotte Stapleton, and James Lee

Since 2020, countries around the world have implemented various interventions in response to a global public health crisis. The interventions included restrictions on mobility, promotion of working from home and the limiting of local and international travel. These, along with other behavioural changes in response to the crisis, affected various sources of air pollution, not least the transport sector. Whilst the way these changes came about is not something to be repeated, understanding their effects will help direct policy for further improving air quality.

 

We analysed NOx, O3 and PM2.5 data from many hundreds of air quality monitoring sites in urban areas around the world, and examined 2020 in relation to the previous 5 years. The data were examined alongside mobility metrics to contextualise the magnitude of the changes, and were viewed through the lens of World Health Organisation guidelines as a metric linking air quality changes to human health. Interestingly, reductions in polluting activities did not lead to wholesale improvements in air quality by all metrics, due to the more complex processes involved in tropospheric O3 production.

 

How to cite: Drysdale, W., Stapleton, C., and Lee, J.: Changes in Global Urban Air Quality due to Large Scale Disruptions of Activity, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9812, https://doi.org/10.5194/egusphere-egu22-9812, 2022.

EGU22-11475 | Presentations | ITS3.5/NP3.1

Scaling Dynamics of Growth Phenomena: from Epidemics to the Resilience of Urban Systems 

Ioulia Tchiguirinskaia and Daniel Schertzer

Defining optimal COVID-19 mitigation strategies remains at the top of public health agendas around the world. It requires a better understanding and refined modelling of the intrinsic dynamics of the epidemic. The common root of most models of epidemics is a cascade paradigm that dates back to their emergence with Bernoulli and d’Alembert, predating Richardson’s famous quatrain on the cascade of atmospheric dynamics. However, unlike in other cascade processes, the characteristic times of a cascade of contacts spreading infection, and the corresponding rates, are usually assumed to be independent of the cascade level. This assumption precludes cascades of scaling contamination.

In this presentation, we theoretically argue and empirically demonstrate that the intrinsic dynamics of the COVID-19 epidemic during the phases of growth and decline are those of a cascade with a rather universal scaling, whose statistics differ significantly from those of an exponential process. This result first confirms the possibility of a higher prevalence of intrinsic dynamics, resulting in slower but potentially longer phases of growth and decline. It also shows that a fairly simple transformation connects the two phases. It thus explains the frequent deviations of epidemic models aligned with exponential growth, and it makes it possible to distinguish an epidemic decline from a change of scaling in the observed growth rates. The resulting variability across spatio-temporal scales is a major feature that requires alternative approaches, with practical consequences for data analysis and modelling. We illustrate some of these consequences using the now famous database from the Johns Hopkins University Center for Systems Science and Engineering.

Due to the significant increase over time of available data, we are no longer limited to deterministic calculus. The non-negligible fluctuations with respect to a power law can readily be explained within the framework of stochastic multiplicative cascades. These processes are exponentials of a stochastic generator Γ(t), whose stochastic differentiation remains quite close to the deterministic one, basically adding a supplementary term σdt to the differential of the generator. When the generator Γ(t) is Gaussian, σ is the “quadratic variation”. Extensions to Lévy stable generators, which are strongly non-Gaussian, have also been considered. To study the stochastic nature of the cascade generator, as well as how it respects the above-mentioned symmetry between the phases of growth and decline, we use universal multifractals. They provide the appropriate framework for the joint scaling analysis of vector-valued time series and for introducing location and other dependencies. This corresponds to enlarging the domain on which the process and its generator are defined, as well as their co-domain, in which they take values. These clarifications should make it possible to improve epidemic models and their statistical analysis.
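The simplest discrete instance of a process that is the exponential of a Gaussian generator is a lognormal multiplicative cascade. The sketch below (parameter values are illustrative, not from the study) builds one by repeatedly halving intervals and multiplying by independent weights W = exp(g) with E[W] = 1:

```python
import math
import random

def lognormal_cascade(levels, sigma=0.2, seed=1):
    """Discrete multiplicative cascade: start from a unit mass and, at each
    level, split every interval in two, multiplying each half by an
    independent weight W = exp(g), with Gaussian g chosen so that E[W] = 1
    (conservation on average). Each final value is thus the exponential of a
    cumulative Gaussian generator, the simplest case discussed above."""
    rng = random.Random(seed)
    field = [1.0]
    for _ in range(levels):
        nxt = []
        for v in field:
            for _ in range(2):
                g = rng.gauss(-sigma ** 2 / 2.0, sigma)  # so that E[exp(g)] = 1
                nxt.append(v * math.exp(g))
        field = nxt
    return field

field = lognormal_cascade(levels=12)
mean = sum(field) / len(field)
print(len(field), round(mean, 3))  # 4096 intervals; the mean fluctuates around 1
```

Because the leaves share ancestors in the cascade tree, the sample mean fluctuates much more than for independent lognormals, a first hint of the long-range correlations and multifractal statistics the abstract appeals to.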

More fundamentally, this study points to a new class of stochastic multiplicative cascade models of epidemics in space and time, which are therefore not limited to compartments. By their generality, these results pave the way for a renewed approach to epidemics and, more generally, growth phenomena, towards more resilient development and management of our urban systems.

How to cite: Tchiguirinskaia, I. and Schertzer, D.: Scaling Dynamics of Growth Phenomena: from Epidemics to the Resilience of Urban Systems, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11475, https://doi.org/10.5194/egusphere-egu22-11475, 2022.

EGU22-11584 | Presentations | ITS3.5/NP3.1

Geophysicists facing Covid-19 

Daniel Schertzer, Vijay Dimri, and Klaus Fraedrich

There has been a series of sessions on the generic theme of “Covid-19 and Geosciences” at AGU, AOGS and EGU conferences since 2020, including during the first lockdown, which required a very fast adaptation to unprecedented health measures. We think it is interesting and useful to give an overview of these sessions and to try to capture the lessons to be learned.

To our knowledge, the very first such session was the Great e-Debate “Epidemics, Urban Systems and Geosciences” (https://hmco.enpc.fr/news-and-events/great-e-debate-epidemics-urban-systems-and-geosciences-invitations-and-replays/). It was organised virtually with the help of the UNESCO UniTwin CS-DC (Complex Systems Digital Campus), thanks to its expertise in organising e-conferences long before the pandemic and the first health measures, and it would not have been possible without the strong personal involvement of its chair, Paul Bourgine. It was held on Monday 4th May on the occasion of the 2020 EGU conference, which went virtual under the title “EGU2020: Sharing Geoscience Online” (4-8 May 2020). The Great e-Debate was not granted the status of an official session of this conference, despite the fact that the technology it used (BigBlueButton) was much more advanced. Nevertheless, it was clearly an extension of the EGU session ITS2.10 / NP3.3: “Urban Geoscience Complexity: Transdisciplinarity for the Urban Transition”.

Thanks to a later venue (7-11 December 2020) and to the existence of a GeoHealth section within the AGU, the organisation of several regular sessions at the 2020 Fall Meeting was easier. For EGU 2021 (19-30 April 2021), a sub-part of the inter- and transdisciplinary sessions ITS1 “Geosciences and health during the Covid pandemic”, a Union Session US “Post-Covid Geosciences” and a Townhall meeting TM10 “Covid-19 and other epidemics: engagement of the geoscience communities” were organised. A brief of the special session SS02 “Covid-19 and Geoscience” of the (virtual) 18th Annual Meeting of AOGS (1-6 August 2021) is included in the proceedings of that conference (in press).

We will review the material generated by these sessions, which shows a shift from a focus on the broad range of scientific responses to the pandemic, to which geoscientists could contribute their specific expertise (from data collection to theoretical modelling), towards an expression of concern about the broad impacts on the geophysical communities, impacts that appear to be increasingly long-term and that constitute a major transformation of how these communities function (e.g., again, data collection and knowledge transfer).

How to cite: Schertzer, D., Dimri, V., and Fraedrich, K.: Geophysicists facing Covid-19, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11584, https://doi.org/10.5194/egusphere-egu22-11584, 2022.

EGU22-11747 | Presentations | ITS3.5/NP3.1

To act or not to act. Predictability of intervention and non-intervention in health and environment 

Michalis Chiotinis, Panayiotis Dimitriadis, Theano Iliopoulou, Nikos Mamassis, and Demetris Koutsoyiannis

The COVID-19 pandemic has brought forth the question of whether draconian interventions should be imposed before concrete evidence of their necessity and efficacy is available. Such interventions could be critical if they are needed to avert a threat, or a threat in themselves if the harms they cause are significant.

The interdisciplinary nature of such issues, as well as the unpredictability of various local responses given their potential for global impact, further complicates the question.

This study aims to review the available evidence and to discuss the problem of weighing the predictability of interventions vis-à-vis their intended results against the limits of knowability of complex non-linear systems, and hence the predictability of non-interventionist approaches.

How to cite: Chiotinis, M., Dimitriadis, P., Iliopoulou, T., Mamassis, N., and Koutsoyiannis, D.: To act or not to act. Predictability of intervention and non-intervention in health and environment, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11747, https://doi.org/10.5194/egusphere-egu22-11747, 2022.

EGU22-12302 | Presentations | ITS3.5/NP3.1

COVID-19 waves: intrinsic and extrinsic spatio-temporal dynamics over Italy 

Tommaso Alberti and Davide Faranda

COVID-19 waves, mostly due to variants, still require timely efforts from governments based on real-time forecasts of the epidemic via dynamical and statistical models. Nevertheless, less attention has been paid to investigating and characterizing the intrinsic and extrinsic spatio-temporal dynamics of the epidemic spread. The large amount of data, both in terms of data points and observables, allows us to perform a detailed characterization of the epidemic waves and of their relation to different drivers such as testing capabilities, vaccination policies, and restriction measures.

Taking the epidemic evolution of COVID-19 across Italian regions as a case study, we perform a Hilbert-Huang Transform (HHT) analysis to investigate its spatio-temporal dynamics. We identified a similar number of temporal components in all Italian regions, which can be linked to both intrinsic and extrinsic source mechanisms such as the efficiency of restriction measures, testing strategies and performance, and vaccination policies. We also identified mutual scale-dependent relations between different regions, suggesting an additional source mechanism related to the delayed spread of the epidemic due to travel and movements of people. Our results are also extremely helpful for providing long-term extrapolations of epidemic counts that take into account both the intrinsic and the extrinsic non-linear nature of the underlying dynamics.
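The Hilbert step of the HHT can be sketched in a few lines. A full HHT first splits the series into intrinsic mode functions by Empirical Mode Decomposition; purely as an illustration, the FFT-based analytic signal (the construction behind scipy.signal.hilbert) is applied below directly to a synthetic amplitude-modulated “wave” to recover its instantaneous amplitude and frequency:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal: keep DC and Nyquist, double the positive
    frequencies and zero the negative ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# Toy amplitude-modulated "epidemic wave" sampled once per day.
t = np.arange(512.0)
period = 40.0  # hypothetical 40-day oscillation, not fitted to real counts
x = (1.0 + 0.3 * np.sin(2 * np.pi * t / 256.0)) * np.cos(2 * np.pi * t / period)

z = analytic_signal(x)
amplitude = np.abs(z)                                      # instantaneous amplitude
inst_freq = np.diff(np.unwrap(np.angle(z))) / (2 * np.pi)  # cycles per day
print(round(1.0 / np.median(inst_freq), 1))                # recovers the 40-day period
```

Applied per region and per mode, the instantaneous amplitudes and frequencies are what allow wave timing and strength to be compared across regions and linked to interventions.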

How to cite: Alberti, T. and Faranda, D.: COVID-19 waves: intrinsic and extrinsic spatio-temporal dynamics over Italy, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12302, https://doi.org/10.5194/egusphere-egu22-12302, 2022.

EGU22-12907 | Presentations | ITS3.5/NP3.1

Responses of concentration and sources of black carbon in a megacity during the COVID-19 pandemic

Black carbon (BC) not only warms the atmosphere but also affects human health. The nationwide lockdown due to the COVID-19 pandemic led to a major reduction in human activity, of a magnitude not seen in the past thirty years. Here, BC concentrations in the urban, urban-industrial, suburban, and rural areas of the megacity of Hangzhou were monitored using a multi-wavelength Aethalometer to estimate the impact of the COVID-19 lockdown on BC emissions. Citywide BC decreased by 44%, from 2.30 μg/m3 to 1.29 μg/m3, following the COVID-19 lockdown. Source apportionment based on the Aethalometer model shows that reduced vehicle emissions were responsible for the BC decline in the urban area, and that biomass burning in rural areas around the megacity made a regional contribution to BC. We highlight that emission controls on vehicles in urban areas and on biomass burning in rural areas should be the most efficient ways of reducing BC in the megacity of Hangzhou.
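The Aethalometer model referred to here is the standard two-wavelength source apportionment: fossil-fuel and biomass-burning absorption are each assumed to follow a power law in wavelength, λ^(−α), with different Ångström exponents, and the measured absorption at a blue and an infrared channel is split accordingly. A sketch, using typical literature exponents and hypothetical absorption values rather than the Hangzhou data:

```python
def aethalometer_model(b470, b950, alpha_ff=1.0, alpha_bb=2.0):
    """Two-wavelength 'Aethalometer model': split the measured aerosol
    absorption at 470 nm and 950 nm into fossil-fuel (ff) and
    biomass-burning (bb) parts, assuming b_abs ~ lambda**(-alpha) for each
    source. The Angstrom exponents are typical literature values, not from
    the abstract. Returns the bb fraction of the absorption at 950 nm."""
    r_ff = (470.0 / 950.0) ** (-alpha_ff)  # ff spectral ratio b470_ff / b950_ff
    r_bb = (470.0 / 950.0) ** (-alpha_bb)  # bb spectral ratio b470_bb / b950_bb
    # Solve b470 = r_ff*b950_ff + r_bb*b950_bb with b950 = b950_ff + b950_bb.
    b950_bb = (b470 - r_ff * b950) / (r_bb - r_ff)
    return b950_bb / b950

# Hypothetical absorption coefficients (Mm^-1), with blue absorption enhanced
# relative to a pure fossil-fuel spectrum.
frac_bb = aethalometer_model(b470=30.0, b950=12.0)
print(round(frac_bb, 2))  # 0.23
```

Multiplying such fractions by the BC mass concentration is what yields the traffic versus biomass-burning split discussed in the abstract.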

How to cite: Li, W. and Xu, L.: Responses of concentration and sources of black carbon in a megacity during the COVID-19 pandemic, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12907, https://doi.org/10.5194/egusphere-egu22-12907, 2022.

EGU22-13522 | Presentations | ITS3.5/NP3.1

Time-dependent forcing in the geosciences and in epidemiology

For many of us, the Covid-19 pandemic turned a long-standing scientific interest in epidemiology into active involvement. An important aspect of the evolution of acute respiratory epidemics is their seasonal character. Our toolkit for handling seasonal phenomena in the geosciences has grown over the last dozen years or so with the development and application of concepts and methods from the theory of nonautonomous and random dynamical systems (NDSs and RDSs). In this talk, I will briefly:

  • Introduce some elements of these two closely related theories.

  • Illustrate the two with an application to seasonal effects within a chaotic model of the El Niño–Southern Oscillation (ENSO).

  • Introduce to a geoscientific audience a simple epidemiological “box” model of the Susceptible–Exposed–Infectious–Recovered (SEIR) type.

  • Summarize NDS results for a chaotic SEIR model with seasonal effects.

  • Mention the utility of data assimilation (DA) tools in the parameter identification and prediction of an epidemic’s evolution.
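The SEIR “box” model mentioned in the outline can be written down in a few lines; the sketch below adds a seasonal (annual) modulation of the contact rate, with illustrative parameter values rather than ones from the talk:

```python
import math

def seir_step(s, e, i, r, t, dt, beta0=0.3, eps=0.2, sigma=0.2, gamma=0.1):
    """One Euler step of a seasonally forced SEIR model with contact rate
    beta(t) = beta0 * (1 + eps*cos(2*pi*t/365)); 1/sigma is the mean latent
    period and 1/gamma the mean infectious period (days)."""
    beta = beta0 * (1.0 + eps * math.cos(2.0 * math.pi * t / 365.0))
    new_inf = beta * s * i          # new exposures per unit time
    ds = -new_inf
    de = new_inf - sigma * e        # exposed become infectious at rate sigma
    di = sigma * e - gamma * i      # infectious recover at rate gamma
    dr = gamma * i
    return s + dt * ds, e + dt * de, i + dt * di, r + dt * dr

# Integrate two years from a small seed of exposed individuals.
s, e, i, r = 0.999, 0.001, 0.0, 0.0
t, dt = 0.0, 0.1
peak_i = 0.0
while t < 730.0:
    s, e, i, r = seir_step(s, e, i, r, t, dt)
    peak_i = max(peak_i, i)
    t += dt
print(round(s + e + i + r, 6))  # the box model conserves the total population: 1.0
```

The explicit time dependence of beta(t) is what makes the system nonautonomous; the NDS results summarized in the talk concern the behaviour of exactly such forced systems.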

References

- Chekroun, M. D., Ghil, M., and Neelin, J. D. (2018): Pullback attractor crisis in a delay differential ENSO model, in: Nonlinear Advances in Geosciences, A. Tsonis (Ed.), Springer, pp. 1–33, doi: 10.1007/978-3-319-58895-7.

- Crisan, D. and Ghil, M. (2022): Asymptotic behavior of the forecast–assimilation process with unstable dynamics, Chaos, in preparation.

- Faranda, D., Castillo, I. P., Hulme, O., Jezequel, A., Lamb, J. S., Sato, Y., and Thompson, E. L. (2020): Asymptotic estimates of SARS-CoV-2 infection counts and their sensitivity to stochastic perturbation, Chaos, 30(5): 051107, doi: 10.1063/5.0009454.

- Ghil, M. (2019): A century of nonlinearity in the geosciences, Earth & Space Science, 6: 1007–1042, doi: 10.1029/2019EA000599.

- Kovács, T. (2020): How can contemporary climate research help understand epidemic dynamics? Ensemble approach and snapshot attractors, J. Roy. Soc. Interface, 17(173): 20200648, doi: 10.1098/rsif.2020.0648.

How to cite: Ghil, M.: Time-dependent forcing in the geosciences and in epidemiology, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13522, https://doi.org/10.5194/egusphere-egu22-13522, 2022.

EGU22-13534 | Presentations | ITS3.5/NP3.1

How can contemporary climate research help understand epidemic dynamics? -- Ensemble approach and snapshot attractors

Standard epidemic models based on compartmental differential equations are investigated under continuous parameter change as external forcing. We show that seasonal modulation of the contact parameter superimposed upon a monotonic decay requires a different description from that of standard chaotic dynamics. The concept of snapshot attractors and their natural distribution has been adopted from the field of recent climate change research. This shows the importance of finite-time chaotic effects and of the ensemble interpretation when investigating the spread of a disease. By defining statistical measures over the ensemble, we can interpret the internal variability of the epidemic as the onset of complex dynamics, even for those values of the contact parameter for which regular behaviour would originally be expected. We argue that anomalous outbreaks of the infectious class cannot die out as long as transient chaos is present in the system. This fact, however, only becomes apparent when using an ensemble approach rather than a single-trajectory representation. These findings are applicable generally in explicitly time-dependent epidemic systems, regardless of parameter values and time scales.
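The ensemble-versus-single-trajectory point can be illustrated with a toy experiment: integrate many SIR trajectories under a decaying, seasonally modulated contact rate, then look across the ensemble at one fixed instant (a “snapshot”) rather than along a single run. The model, parameters and perturbations below are illustrative, not those of the study:

```python
import math
import random

def sir_trajectory(beta_fn, rng, gamma=0.1, i0=1e-3, dt=0.1, days=365):
    """One SIR trajectory with time-dependent contact rate beta_fn(t) and a
    randomly perturbed initial infectious fraction; returns daily samples of
    the infectious fraction."""
    i = i0 * (1.0 + 0.5 * rng.random())  # perturbed initial condition
    s = 1.0 - i
    out = []
    steps_per_day = int(round(1.0 / dt))
    for step in range(days * steps_per_day):
        if step % steps_per_day == 0:
            out.append(i)
        t = step * dt
        new_inf = beta_fn(t) * s * i
        s, i = s - dt * new_inf, i + dt * (new_inf - gamma * i)
    return out

# Seasonal modulation of the contact rate superimposed on a monotonic decay,
# mirroring the forcing described in the abstract.
def beta_fn(t):
    return 0.3 * math.exp(-t / 500.0) * (1.0 + 0.3 * math.sin(2.0 * math.pi * t / 365.0))

rng = random.Random(42)
ensemble = [sir_trajectory(beta_fn, rng) for _ in range(200)]

# Snapshot statistics: statistics across the ensemble at one instant, not
# along a single run.
day = 180
snapshot = [traj[day] for traj in ensemble]
mean = sum(snapshot) / len(snapshot)
spread = math.sqrt(sum((v - mean) ** 2 for v in snapshot) / len(snapshot))
print(f"day {day}: ensemble mean {mean:.4f}, spread {spread:.4f}")
```

The ensemble spread at a fixed time is the internal variability the abstract refers to; a single trajectory gives no access to it.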

How to cite: Kovács, T.: How can contemporary climate research help understand epidemic dynamics? -- Ensemble approach and snapshot attractors, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13534, https://doi.org/10.5194/egusphere-egu22-13534, 2022.

CC BY 4.0